Everyone who works in mission-critical environments understands the need for an effective disaster recovery solution. Organizations demand that disaster recovery operations be fully automated and executable in a repeatable manner, so that they are always ready for disaster situations. In addition, organizations have always demanded seamless migration of applications across sites for planned activities.
What is IBM and VMware’s joint DR solution in a virtualized environment?
IBM SAN Volume Controller (SVC) stretched cluster with VMware Site Recovery Manager (SRM) support for stretched clusters (announcement link) is an ideal combination for a disaster recovery solution using the IBM Storwize Family Storage Replication Adapter (SRA). It offers customers the ability to survive a wide range of failures transparently by planning for disaster avoidance, disaster recovery and mobility. The solution also offers planned live migration of applications running on virtual machines across sites by orchestrating cross-vCenter vMotion operations, enabling zero-downtime application mobility.
IBM SVC is an industry-leading storage virtualization solution that can virtualize storage devices from practically all other storage vendors. With a stretched cluster implementation, customers get an active-active configuration in which servers and ESXi hosts can connect to storage cluster nodes at both sites. This helps create balanced workloads across all nodes of the cluster and provides disaster recovery capabilities in case of site failures.
VMware SRM can be seamlessly configured with IBM SVC stretched clusters using the IBM Storwize Family SRA. To configure the solution, the SVC nodes are set up in a stretched cluster configuration with ESXi servers able to access storage across both sites. A quorum site is set up per the IBM SVC stretched cluster configuration requirements to resolve tie-break situations in case of a link failure between the two main sites. A VMware vCenter server is configured to manage the ESXi servers at each site, and VMware SRM is installed at each site to configure and automate the disaster recovery solution.
How to configure the solution?
Documents are available that individually describe IBM SVC stretched cluster and VMware Site Recovery Manager, their benefits and their respective configuration details. The purpose of this blog is to touch on the key steps and guidelines required to configure the combined solution for planned and unplanned downtime.
What configuration is needed on SVC?
- Configure SVC in a stretched cluster mode
SVC has supported stretched cluster configurations for some time now. A stretched cluster implementation places the two nodes of an I/O group in two separate locations. These two locations (sites) can be two racks in a data center, two buildings in a campus, or two labs within supported distances. A third site is configured to host a quorum device that provides an automatic tie-break in the event of a link failure between the two main sites.
- Configure mirrored volume on a SVC stretched cluster
In SVC, the volume mirroring feature is used to keep two physical copies of a volume, and each copy can belong to a different storage pool. With the stretched cluster feature, a mirrored volume can be configured from external storage across two physically separated sites.
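As a minimal sketch (the I/O group, pool and volume names below are illustrative, not from a specific configuration), a mirrored volume spanning both sites can be created from the SVC command line by specifying two storage pools and two copies:

# Create a volume with two physical copies, one in a pool at each site
svctask mkvdisk -iogrp io_grp0 -mdiskgrp Site1_Pool:Site2_Pool -size 100 -unit gb -copies 2 -name vm_datastore_01

# Verify that both copies exist and are synchronized
svcinfo lsvdiskcopy vm_datastore_01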
Any special need for vCenter and SRM installation for supporting this solution?
SRM stretched cluster support takes advantage of vSphere's ability to perform vMotion across sites and across vCenter server instances. Therefore, the two vCenter server instances (at the protected and recovery sites) need to be configured in enhanced linked mode to enable cross-vCenter vMotion.
- SRM installation at protected and recovery sites
Install the SRM server instances at the protected and recovery sites and register each SRM server instance with the Platform Services Controller at its respective site.
Where does IBM SRA come into picture?
IBM Storwize Family SRA is a software add-on that integrates with SRM to run failovers. It extends SRM capabilities and uses replication and mirroring as part of the comprehensive SRM Disaster Recovery Planning (DRP) solution. The IBM Storwize Family SRA is installed at both the protected and recovery sites and works with the SRM instances to run failovers.
What’s new while creating vSphere storage policy?
Site Recovery Manager 6.1 adds a new type of protection group: the storage policy-based protection group. Storage policy-based protection groups use vSphere storage policies to identify protected datastores and virtual machines, and they automate the process of protecting and unprotecting virtual machines and of adding and removing datastores from protection groups. To easily identify IBM storage objects in the vSphere inventory, you can create an IBM storage tag, build a tag-rule-based storage policy, and then associate the stretched datastore with that storage policy using the IBM storage tag rules.
How to configure SRM for this solution?
- After pairing the sites together, register the IBM Storwize Family SRA with the SRM server instances at the protected and recovery sites, and then configure the array managers using the SVC nodes.
- Configure bidirectional Network Mappings, Folder Mappings, Resource Mappings, and Placeholder Datastores Mappings between protected and recovery sites.
- NEW ⇒ SRM 6.1 allows you to configure storage policy-based protection groups using storage policy mappings. When a storage policy at the protected site is mapped to a storage policy at the recovery site, SRM places the recovered virtual machines in the vCenter Server inventory and on datastores at the recovery site according to the storage policy it is mapped to.
- NEW ⇒ A storage policy-based protection group enables automated protection of virtual machines that are associated with a storage policy, which in turn is applied by tagging them to reside on a particular datastore. When a virtual machine is associated with or disassociated from a storage policy, SRM automatically protects or unprotects it.
- Configure a recovery plan using storage policy based protection group.
Why test the recovery plan?
Testing a recovery plan makes the environment ready for disaster recovery situations by exercising almost every aspect of the plan. It is strongly recommended to test the recovery plan for both planned migration and disaster recovery situations to avoid surprises.
Okay, I have a recovery plan, but what's next?
Failover and reprotect the recovery plan: After a recovery plan has been tested successfully, it is ready for either planned failover or disaster recovery situations. After failover, the recovery site becomes the primary site. SRM provides a reprotect function to automate protection in the reverse direction.
Hopefully the steps above give an overview of the various configuration steps required to set up the solution and help you plan accordingly. For additional configuration details, refer to the technical guide Implementing disaster recovery using IBM SAN Volume Controller and VMware Site Recovery Manager.
Disclaimer: These are my personal views and do not necessarily reflect those of my employer.
Reducing Video Surveillance Storage Costs with the IBM Cloud Object Storage
As the use of camera security surveillance systems grows due to increased security concerns, so does the demand for better image quality. Demand for longer video retention times is also growing due to legal compliance. Higher image quality and longer retention mean more data to store and archive at the storage level.
High Resolution Cameras, Longer Retention, No Worries
While the cost of recording devices is affordable, the storage costs of meeting law enforcement regulations for evidence are more of a concern. This leads to the question: "How can I reduce video storage archiving costs without compromising the secure chain of custody required for evidence purposes?"
The short answer: IBM Cloud Object Storage solves this problem by providing a scalable solution at an affordable price.
What is the Solution?
An IBM Storwize system is configured as the storage for the live and near-line archived surveillance content. Tiger Bridge from Tiger Technology is configured on the Video Management Server (VMS) to move older content from the Storwize system volumes to IBM Cloud Object Storage. Tiger Bridge is a secure and flexible software connector for Windows that transparently replicates and tiers data from a standard NTFS volume to an on-premises or off-premises target without affecting users, applications or workflows. IBM Cloud Object Storage is configured as the archive tier for retaining video surveillance content over longer retention periods; it provides organizations the flexibility, scalability and simplicity required to store, manage and access today's rapidly growing video surveillance data in private, hybrid and public cloud environments.
This solution uses a modular and scalable architecture to address growing data storage requirements. With it you get:
- An IBM Storwize system for write-intensive live recording and near-line archiving.
- The Tiger Bridge connector for transparent data replication and tiering between IBM Storwize and IBM Cloud Object Storage.
- IBM Cloud Object Storage for archiving content for longer retention.
- Optimized cloud solutions for on-premises private clouds or hybrid cloud service providers.
How It Works
IBM Storwize system volumes are presented to the Windows recording servers and configured as local volumes with the standard NTFS file system for storing live and archive data. Tiger Bridge scans the data on the NTFS volume and transparently replicates and tiers it to the IBM Cloud Object Storage system based on parameters configured in the Tiger Bridge connector. For example, you can configure it to move data based on the accessed or modified timestamps of the video archives.
Data Migration: A background Tiger Bridge process that transparently replicates and tiers data from the IBM Storwize system (NTFS volume) on the VMS system to IBM Cloud Object Storage.
Recall: If a security desk client requests data that is not resident on the IBM Storwize system, Tiger Bridge transparently retrieves the data from IBM Cloud Object Storage and streams it to the security viewer without any loss of video frames.
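Tiger Bridge performs this movement transparently inside the file system (tiered files are replaced by stubs so they can be recalled on demand). Purely to illustrate the timestamp-driven policy concept, and not Tiger Bridge's actual mechanism, equivalent logic could be sketched with standard tools against the S3-compatible interface of IBM Cloud Object Storage; the paths, bucket name and 30-day threshold below are hypothetical:

# Illustrative only: copy archives not modified in 30 days to a COS bucket
find /mnt/vms_archive -type f -mtime +30 | while read f; do
    aws --endpoint-url https://s3.us.cloud-object-storage.appdomain.cloud \
        s3 cp "$f" "s3://surveillance-archive${f#/mnt/vms_archive}"
done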
Where can it be deployed?
Our time-tested solutions turn storage challenges into business advantage by reducing storage costs while reliably supporting both traditional and emerging cloud-born video surveillance workloads.
Below are some use cases that benefit from the IBM Cloud Object Storage solution:
- Airports, metro railway stations and city surveillance systems with longer retention requirements
- Cloud Service Providers (CSPs) hosting and providing cloud-based video surveillance solutions
- Law enforcement departments looking for body camera solutions.
The IBM Storage video surveillance solution offers a modular and cost-effective multi-tier cloud storage architecture to meet the needs of next-generation video surveillance technologies.
For information technology (IT) customers looking to control site expansion costs, Microsoft offers its Azure cloud services. To appeal to larger customers with existing disaster recovery (DR) models that span multiple sites and use SAN solutions, Microsoft recently added Azure Site Recovery (ASR) to its cloud services mix, allowing Microsoft to target the full spectrum of potential cloud customers. For small customers that cannot afford the costs (both CAPEX and OPEX) associated with additional sites, traditional Azure services meet their DR requirements. However, to further increase Azure revenue, Microsoft realized it needed to attract more large businesses with existing SAN infrastructures by appealing to cost-conscious CIOs facing common IT budget constraints. In essence, Microsoft cloud services continue to appeal to smaller customers who cannot add data centers or sites, as well as larger customers who wish to control site sprawl. Why bother with the cost and management headaches of maintaining additional sites for disaster recovery, meeting customer service level agreements by protecting business-critical data and services, when you can let Microsoft protect them for you and save your money and sanity for other high-priority business needs? That is where Microsoft Azure and ASR services come into play.
Heralded as Microsoft’s cloud computing platform, Azure provides a simple, reliable, and extensible web-based interface or front end that is tightly integrated with a Microsoft System Center VMM and SQL Server back end. While the overall Azure model is multi-tiered, think of it, more or less, as a web management portal that uses Internet Information Services (IIS) at its foundation with VMM as the engine that drives its cloud tasks. VMM in turn, stores all of the cloud configuration and environment data in a SQL Server database. Of course, the Azure cloud itself is based on the Microsoft System Center application suite and consists of a Microsoft global network of secure data centers that offer compute, storage, network, and application resources to help protect your data and offset the high availability and administrative costs of building and managing additional sites. Even though Azure has multiple tiers, the storage array aspects of Azure Site Recovery using SAN replication for on-premises clouds and how replication differs from traditional Hyper-V replica implementations is the primary focus of this blog.
The Hyper-V replica feature is designed to protect VMs hosted by different servers using a built-in replication mechanism at the VM level. A primary site VM can asynchronously replicate to a designated replica site using an Ethernet network infrastructure including local area networks (LAN) or wide area networks (WAN). The designated replica remains offline in a stand-by state pending planned or unplanned VM failovers. After the initial VM copy is replicated to the secondary site, asynchronous replication occurs for only the primary VM changes. This network-based replication does not require shared storage or specific storage hardware and Hyper-V replicas can be established between stand-alone or highly available (HA) VMs, or a combination of both. The Hyper-V servers can be geographically dispersed and the VMs are not even required to belong to a domain. Thus, the Hyper-V replica requirements are rather basic and easy to implement yet are restricted to asynchronous network replication only.
Until recently, Azure could only leverage Hyper-V replicas using a network replication channel, but it can now use SAN replication between two on-premises VMM sites or clouds. With the addition of a Hyper-V replica SAN replication channel, synchronous replication can be used to eliminate asynchronous lag times, and multi-VM consistency is possible using Azure Site Recovery. It is important to realize that asynchronous SAN replication behaves like asynchronous Hyper-V network replication: after the initial VM copy is replicated to the secondary site, only the primary VM changes are replicated. If IBM Real-time Compression is used, performance gains are also realized because less data has to be replicated over the SAN. Whatever the storage options, with just a few clicks in the Azure Site Recovery management portal, simple orchestration of IBM XIV replication and disaster recovery for Microsoft Hyper-V environments can be automated in the form of planned and unplanned site failovers. In a practical sense, this collection of Azure SAN replication enhancements and disaster recovery functionality is an extension of past Microsoft System Center VMM storage automation features.
So with the introduction of Microsoft ASR cloud services, larger customers have the option to provide DR for their private clouds using IBM XIV SAN replication but they also can take advantage of hybrid cloud protection by subscribing to Microsoft Azure services. This services model gives customers the opportunity to protect their existing data center and SAN infrastructure investments while enticing them to purchase additional Microsoft Azure cloud services. Refer to Figure 1 below for a general Microsoft cloud layout that uses ASR with IBM XIV SAN replication:
Figure 1: Microsoft Azure Site Recovery using IBM XIV SAN replication general lab configuration
For further information about Microsoft ASR using IBM XIV Storage System Gen3, including step-by-step configuration processes, please refer to the following white paper:
Quick access to copies of your data is challenging in traditional environments. New-age use cases such as cloud, DevOps, analytics and reporting rely on quick access to data copies. Without automation in place for quick and reliable access to data copies, the operational efficiency of your business can be severely impacted.
IBM Spectrum Copy Data Management, used in conjunction with IBM Spectrum Storage, enables critical use cases by providing in-place copy data management to modernize processes within existing infrastructure.
IBM Spectrum Copy Data Management, which can be deployed inside a virtual machine in 15 minutes, can catalog the existing environment, including IBM Spectrum Storage, VMware environments and applications such as Oracle or Microsoft SQL Server.
The following diagram shows orchestrated copy data management of applications hosted on a VMware virtualized environment and IBM Spectrum Storage.
It leverages the native Global Mirror and FlashCopy functions of IBM Spectrum Virtualize to create snapshots, clones and replicas, as sketched below.
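For context, the kind of IBM Spectrum Virtualize CLI operations that IBM Spectrum Copy Data Management automates look roughly like this (the volume, map, relationship and cluster names are illustrative assumptions, not from a specific deployment):

# Point-in-time copy of a source volume with FlashCopy
svctask mkfcmap -source db_vol -target db_vol_snap -name db_snap_map -copyrate 0
svctask startfcmap -prep db_snap_map

# Asynchronous replication of the volume to a partner system with Global Mirror
svctask mkrcrelationship -master db_vol -aux db_vol_dr -cluster remote_cluster -global -name db_dr_rel
svctask startrcrelationship db_dr_rel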
IBM Spectrum Copy Data Management offers the following benefits with IBM Spectrum Virtualize:
1) Automatic creation and use of snapshots and replicas in existing IBM Spectrum Virtualize systems, ensuring application consistency.
2) Simplified management of data copies by efficiently maintaining the versions of data copies residing on IBM Spectrum Virtualize systems.
3) Support for high-value use cases such as automated disaster recovery across cloud service providers.
4) By integrating application-centric VMware and Spectrum Virtualize systems together, it caters to the modern use case of utilizing data copies in a DevOps environment.
Workflow for IBM Spectrum Copy Data Management
To view the configuration steps of the demo and learn more about the integration of IBM Spectrum Copy Data Management with IBM Spectrum Storage and VMware, visit:
YouTube link: https://youtu.be/OwfUjclfKVc
Technical white paper: https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=TSW03547USEN
The growing shift toward cloud computing and the need for flexibility are making hybrid cloud solutions a serious business imperative. Hybrid cloud done right is an effective, highly agile, cost-saving alternative to traditional storage alone. For many organizations, disaster recovery is a primary need and the debut use case for bringing public cloud into their environment.
We have built some exciting hybrid cloud scenarios using a winning combination of IBM Spectrum Accelerate family offerings and VMware. IBM Spectrum Accelerate is software-defined storage (SDS) built from proven enterprise-class XIV technology. True to the calling card of SDS, it deploys on heterogeneous hardware, and IBM makes it deployable in every possible way: in private cloud, hybrid cloud and public cloud solutions, including as a service. It runs on purpose-built or customer-chosen commodity servers, can be hosted on public cloud infrastructures such as IBM Bluemix (IBM SoftLayer®), and is even available from a third-party vendor as a pre-installed appliance. It can be licensed to run on XIV, FlashSystem A9000 and A9000R systems for long-term investment value. It is also available as a service, IBM Spectrum Accelerate on Cloud, ordered through IBM Passport Advantage® and supported by IBM Lab Services, for deployment by the terabyte on IBM Cloud.
The design and mature technology underlying IBM Spectrum Accelerate offer a faster path to deploying and managing a hybrid cloud built for agility, ease of use and cost savings, providing:
- Advanced VMware-centric hybrid cloud solutions including disaster recovery with XIV systems
- Exceptional performance, availability and advanced features from proven technology
- An efficient hyper-converged infrastructure managed with an award-winning GUI and vCenter
- The ease of hosting, moving and managing workloads in a single pane hybrid cloud environment
Coming to InterConnect 2017, Las Vegas, USA? Visit us in IBM Systems booth #344 (20-22 March 2017) for exciting live hybrid cloud demos:
For detailed implementation and configuration details, register and attend session #5028 "Implementing Disaster Recovery Solution across Hybrid Cloud using IBM Spectrum Storage" on Wednesday, 22nd March 2017 at InterConnect 2017.
1) Hybrid cloud disaster recovery solution leveraging VMware Site Recovery Manager and IBM Spectrum Storage. The diagram below provides an architectural overview of the demo we are showing.
We can have either IBM XIV storage or IBM Spectrum Accelerate storage at the protected site and IBM Spectrum Accelerate at the recovery site. The Spectrum Accelerate instance can be located in another rack in the same data center, in a second, physically separate data center, or in a public cloud such as IBM Bluemix.
This hybrid cloud solution uses the fully tested and certified IBM XIV Storage Replication Adapter (SRA) to deliver business continuity across a wide range of failures. It further brings flexibility to an organization by enabling migration of virtual infrastructure workloads between the data center and the cloud.
2) Orchestrated and automated storage provisioning using vRealize Automation and vRealize Orchestrator with IBM Spectrum Accelerate family offerings spanning hybrid cloud
In this solution, we use the IBM® Spectrum Control Base Edition integration with VMware vRealize Orchestrator and vRealize Automation to take the service around infrastructure beyond orchestration. IBM Spectrum Control Base Edition is a centralized cloud integration system that consolidates IBM storage provisioning, virtualization, cloud, automation, and monitoring solutions through a unified server platform. By using VMware’s Advanced Service Designer feature in vRealize Automation and vRealize Orchestrator, together with IBM Spectrum Accelerate, we show how you can deliver XaaS (Anything-as-a-Service) across hybrid cloud deployments to your users.
IBM and Hortonworks have announced an integrated analytics solution using IBM Spectrum Scale and the IBM Elastic Storage Server (ESS) with the Hortonworks Data Platform (HDP). This announcement comes a week after a series of other updates to the IBM Spectrum and Cloud Object Storage product lines, and sees HDP certified on IBM Spectrum Scale on both POWER8 and x86.
Ed Walsh, general manager for IBM Storage and Software Defined Infrastructure, said: "Every organization is becoming a digital organization. With this announcement, IBM is delivering a powerful platform to extend the use of data and for cognitive applications. This announcement shows our partner community IBM's commitment to helping clients grow, develop, and transform the use of their own data with less complexity."
Big Data & Analytics with Hadoop
For rapidly growing, unstructured data, Hadoop is the platform of choice for many organizations, enabling them to store, process, and analyze petabytes of information. Traditional data repositories cannot scale with unstructured big data workloads, so enterprises are adopting Hadoop to store large chunks of data and run analytics that derive valuable insights. The current market size for Hadoop is roughly $6B, forecast to grow to $50B by 2020.
HDP is the secure, enterprise-ready open source Apache™ Hadoop® distribution based on a centralized architecture (YARN). HDP addresses the complete needs of data-at-rest, powers real-time customer applications and delivers robust analytics that accelerate decision making and innovation.
- Pure-play, 100% open source distribution
- Hortonworks is the #1 Apache Hadoop committer
- More than 1,000 customers
- ODPi compliance
- Apache Spark is part of the HDP distribution
Note: IBM Spectrum Scale is already certified and supported with IBM’s Hadoop distribution (IBM IOP/BigInsights)
Better storage for Hadoop
The default storage for Hadoop is HDFS, the Hadoop Distributed File System, which runs on storage-rich servers (storage internal to the servers).
Hadoop finds a value-added data platform in IBM Spectrum Scale and the IBM Elastic Storage Server (ESS), which provide enhanced features and eliminate the need to keep multiple data copies. The following comparisons illustrate some of the ways IBM Spectrum Scale and ESS enhance Hadoop.
- HDFS challenge: Clients have to copy data from enterprise storage into HDFS in order to run Hadoop jobs, because Hadoop does not run directly on standard protocols like SMB/Object.
  Benefit: Reduce data center footprint.
  With Spectrum Scale / ESS: The same data is accessible through HDFS, NFS, SMB and Object, so no data copying is required to run Hadoop analytics.
- HDFS challenge: HDFS is a shared-nothing architecture in which disks and cores grow in the same ratio, which is less efficient for high-throughput jobs.
  Benefit: Reduce cluster sprawl.
  With Spectrum Scale / ESS: ESS is shared storage best known for its scalability and performance.
- HDFS challenge: Costly data protection; the default uses 3-way replication. (Erasure coding in HDFS has some limitations and is perhaps more appropriate for cold data.)
  With Spectrum Scale / ESS: ESS software RAID eliminates the need for 3-way replication to achieve data protection.
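To make the "no data copying" point concrete, here is a minimal sketch assuming the IBM Spectrum Scale HDFS Transparency connector is configured so that the Hadoop namespace maps onto the Spectrum Scale file system (the mount point and paths are illustrative):

# Ingest a file through the POSIX interface of the Spectrum Scale file system
cp /staging/clickstream.csv /gpfs/fs1/analytics/

# Read the same file immediately through the Hadoop client, with no copy into HDFS
hdfs dfs -cat /analytics/clickstream.csv | head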
Bringing Hortonworks Data Platform to IBM Spectrum Scale or IBM Elastic Storage Server provides three key benefits: better storage efficiency, hybrid storage, and high performance. In terms of the first benefit, Elastic Storage Server uses erasure coding that eliminates the need to have multiple data copies. Second, the combined service extends on-premises storage to the cloud, delivering economic, security, and accessibility benefits. Third, Elastic Storage Server delivers high-speed data throughput.
Software-defined storage (SDS) is a key component for clients adapting to the modern data center and enabling hybrid clouds. By decoupling the storage hardware from the software that manages it, SDS empowers clients to keep their existing heterogeneous storage hardware while simplifying management by virtualizing that underlying hardware. Moreover, clients gain the advantages of data replication and seamless migration between heterogeneous storage platforms.
Disaster Recovery as a Service using IBM Spectrum Virtualize
The solution is built on IBM Spectrum Virtualize software running on Intel x86 processor-based servers at the recovery site and IBM Storwize V7000 at the protected (production) site. It leverages VMware Site Recovery Manager (SRM) to orchestrate replication between the IBM Storwize V7000 and IBM Spectrum Virtualize. The diagram below shows an architectural overview of the environment.
Where can it be deployed?
- Cloud and managed service providers looking to deliver DRaaS to users with heterogeneous or dissimilar storage infrastructures
- Clients looking to reduce capital expenditure (CAPEX) and the operational cost of DR by using a software-defined storage based approach
- Clients looking to integrate with cloud orchestration and interoperate with existing on-premises storage systems
- Organizations looking to optimize their existing heterogeneous storage infrastructure with a centralized storage management tool
Look for more resources here.
Disaster recovery as a service using IBM Spectrum Virtualize and VMware Site Recovery Manager integration
YouTube URL: https://youtu.be/Gc3oaBkQbR4
After successfully implementing the Real-time Compression feature in Storwize V7000, IBM has taken a step further by bringing this patented technology to the IBM XIV storage system. With the recent announcement of the XIV 11.6.0 release, the Real-time Compression feature is seamlessly integrated into the XIV storage system. Eliminating the need to add any extra hardware, the IBM Random Access Compression Engine (RACE) technology is now integrated with the XIV storage system software stack to compress data before writing it to disk (above the cache mechanism), resulting in up to 80% storage capacity savings.
It is designed with transparency in mind so that it can be implemented without changes to applications, hosts, networks, fabrics, or external storage systems. The compression is not visible to hosts, so users and applications continue to work as-is. To estimate the compression savings on existing non-compressed XIV volumes, the Comprestimator utility is now integrated with the XIV software.
What does compression have in store for me?
On the XIV system, the compression ratio for all uncompressed volumes is continuously estimated, even before compression is enabled. The figure shows the various states of volumes on the system, ranging from uncompressed, to potential savings, to the total amount of compression on a volume.
What are the compression benefits for XIV?
With the inline implementation of Real-time Compression, IBM XIV now delivers dramatic cost savings without the need for extra hardware, providing the following benefits:
- Increases usable capacity per rack typically to one Petabyte or more with Real-time Compression, greatly reducing effective cost per capacity
- Replicates compressed data faster and using less bandwidth, freeing up bandwidth for other uses
- Continuously displays predicted or actual compression ratios for all volumes
- Converts non-compressed volumes to compressed non-disruptively
So how does it work?
The Real-time Compression implementation in XIV storage uses an above-cache architecture in which data is compressed or decompressed between the I/O interface and the cache. A compression node runs on every module of the XIV, taking advantage of XIV's parallel architecture: each node compresses only the portion of a volume that belongs to its module, distributing the compression workload across all modules. As a result, the Real-time Compression implementation in XIV has minimal impact on the performance delivered by the system.
On write operations, data is compressed before it enters the cache and an acknowledgment is sent back to the host. Data is stored compressed in the cache, so on read operations it is decompressed by RACE as it is read from the cache, before being passed to the host. During XIV mirroring operations, data is compressed only once and the compressed data is sent across the network, reducing network bandwidth.
What will benefit most from compression?
- Database environments – DB2, Oracle, MS-SQL, and so on
- Database Applications – SAP, Oracle applications, and so on
- Server/Desktop Virtualization – KVM, VMware, Hyper-V, and so on
- Other compressible workloads – seismic, engineering, and so on
- Email – Microsoft Exchange, and so on
Are there any guidelines for compression?
IBM Real-time Compression is appropriate for data that has the following characteristics:
- Any data for which the Comprestimator tool estimates 25% or higher savings
- Volumes that contain data that is not already compressed (for example, uncompressed image and video files)
- Data for which application-based encryption is not used, or data that is not sent encrypted to the XIV
Anything I can refer to?
Real-time Compression not only works well with randomly accessed data such as databases like IBM DB2, Oracle and MS-SQL Server, but also provides good results with server virtualization solutions like VMware, KVM and Hyper-V. When using Oracle databases, compressed volumes take advantage of the above-cache architecture, compressing writes seamlessly. 57% compression savings were observed during the creation of a terabyte of data, with minimal performance penalty. (Publication: WP102551)
VMware vSphere virtual machines can be seamlessly deployed on compressed volumes, often with compression savings of 50% to 75%, allowing customers to reduce the storage capacity required for virtualized environments. (Publication: WP102552)
Microsoft Hyper-V virtualization helps customers maximize System x server and other resource use. Included in Windows Server, Hyper-V helps reduce costs by allowing a greater number of application workloads to be hosted on fewer physical servers. Microsoft SQL Server 2012 SP1 OLTP data files and Windows Server 2012 R2 VM system files stored on a Hyper-V virtual disk on an XIV compressed volume achieved 73% compression savings. (Publication: WP102553)
What about performance?
While one team tested the compression benefits and compiled the papers above, another team from the IBM Tel Aviv lab was busy with performance testing of an Oracle database hosted on IBM XIV compressed volumes.
In the test setup, the team configured both compressed and uncompressed volumes on the XIV for better parallelism. These volumes were mapped to the ESX system hosting the database server to create multiple VMFS file systems, and a 5 TB database was created on the VMFS volumes using the Benchmark Factory tool. During the 12-hour test run, load starting at 1,000 users and growing to a maximum of 30,000 users was applied to put the system under realistic production load. The I/O per second (IOPS) and response time information reported by Benchmark Factory is shown in the figure below; each point on the graph indicates the addition of 2,500 users. The graph clearly indicates that the application sees minimal impact on response time when using the compressed volumes.
Blog Authors: Mandar Vaidya, Shashank Shingornikar
IBM Storage at your service courtesy of IBM Spectrum Control and VMware vRealize Automation
In today's emerging, or, I would say, stabilizing world of IT cloud, everything needs to be delivered "as a service". Because of this, there is a growing demand for any IT solution to be available as a service. Some organizations are thinking creatively to come up with new IT solutions as a service, while other organizations are developing the cloud platforms that help them deploy those solutions quickly. And this is where the race has begun.
There are end users, or business opportunists, who do not want to spend time designing and implementing a solution before selling it; they want to start their business in a very short time. They are ready to rely on the organizations that provide these kinds of platforms to make their solution available quickly. The platforms help in building a public cloud, a private cloud, or both, i.e. a hybrid cloud, depending on the need.
There are various vendors in the market who provide the whole platform themselves and who also integrate with other vendors to build a unique platform for building clouds. In today's world, any product being developed has to take into account how it can be integrated into the cloud or how it can enable a cloud platform.
There are various enablers of cloud, ranging from the top layer to the bottommost layer of a cloud solution. IBM Spectrum Control is one of those enablers, providing efficient infrastructure management for cloud, virtualized and software-defined storage. It simplifies and automates storage provisioning, capacity management, availability monitoring and reporting.
IBM Spectrum Control is also an important factor in the success of the IBM Spectrum Storage offerings, providing a control plane capable of provisioning and monitoring storage in the cloud or on-premises, with the control defined in software. The software-defined storage characteristics of IBM Spectrum Control allow it to receive storage definitions from the top layer through interfaces made available to cloud providers or enablers. These interfaces take the form of plug-ins developed for specific cloud vendors. For example, IBM Spectrum Control Base Edition provides plug-ins for VMware vRealize Orchestrator, VMware VASA, VMware vRealize Operations, VMware vRealize Automation, etc.
With the help of IBM Spectrum Control Base Edition and the VMware vRealize suite, a cloud architect can design various cloud solutions and deliver them "as a service". One very useful solution for cloud environments is "storage as a service" using IBM storage. In this solution, an architect designs a service using VMware vRealize Automation, VMware vRealize Orchestrator, IBM Spectrum Control Base Edition and the IBM XIV storage system, making storage available as a service so that an entitled end user can request storage space for their VMs.
VMware vRealize Automation, with its Advanced Services, can deliver almost anything as a service (XaaS). The Advanced Services feature of vRealize Automation allows a cloud architect or administrator to advertise vRealize Orchestrator workflows as a service: whatever workflows are designed in vRealize Orchestrator can be exposed from vRealize Automation. The IBM Storage plug-in for vRealize Orchestrator, a component of IBM Spectrum Control, allows vRealize Automation to define or provision storage per the administrator's or user's need.
For more details on how "storage as a service" can be implemented using IBM storage, IBM Spectrum Control and VMware vRealize, refer to the technical paper.
Also refer to the recorded demos below:
Demo: IBM Spectrum Control & VMware vRealize Automation - Configuration
The video demonstrates the configuration flow for integrating IBM Spectrum Control Base Edition, IBM XIV, VMware vRealize Orchestrator and VMware vRealize Automation to enable a "storage as a service" solution. It also demonstrates creating a volume, mapping the volume and creating a datastore from the vRealize Automation web console.
Demo: IBM Spectrum Control & VMware vRealize Automation - Datastore Creation
This short video demonstrates the creation of a datastore upon a user request from VMware vRealize Automation, with IBM Spectrum Control Base Edition playing its part by seamlessly creating the backing volume on the storage.
For more information: https://www.ibm.com/systems/storage
Disclaimer: The above are my personal thoughts and not necessarily those of my employer.
From the good old days of DOS, everyone knew the benefits of compression. Back in those days, disk capacity was scarce.
In those days, PCs had 40MB of HDD capacity, and programs like FoxPro 2.6 and Windows 3.1 could not be accommodated on a single disk; one had to remove the Windows 3.1 installation to make space for FoxPro. Soon, disk compression programs such as "Stacker" appeared, which could compress the data on disk so that more space was available for applications.
Gone are the days of 40MB HDDs; disk capacities have grown enormously since.
In the current era of technology, data is growing tremendously, and organizations are facing issues with both structured and unstructured data.
IBM has a wide variety of storage systems available, ranging from small and medium business offerings to large enterprise systems with scalable capacity, and in order to give its clients more value from storage, compression-enabled storage was introduced. IBM first introduced the Random Access Compression Engine (RACE) technology in the IBM Real-time Compression (RtC) appliances, then integrated the same technology into the IBM Storwize V7000 family in 2013.
RtC is seamlessly integrated into the Storwize V7000 system software stack to compress data before writing it to disk, resulting in up to 80% storage capacity savings depending on the type of data. That is effectively equal to five times more capacity out of the same physical capacity in your system. RtC compresses data even before it is written to your disks, is completely transparent to applications, and maintains data consistency. It is implemented without any changes to applications, hosts, fabrics, networks, etc.
Since its inception, many users have implemented RtC on their Storwize V7000. Even though RtC provided great disk space savings by compressing the data, the implementation of RtC in the first generation of Storwize V7000 came with a performance penalty: when enabled, RtC used significant processing power of the system, causing performance bottlenecks, and the benefit offered by RtC was dwarfed by these performance issues.
IBM addressed this issue in the next generation of the Storwize V7000 system by making use of hardware compression acceleration with Intel® QuickAssist technology, which provides dedicated processing power and greater throughput for compression.
With the new hardware compression acceleration and better hardware resources, the Storwize V7000 Gen2 easily overcomes the performance penalties seen with Storwize V7000 Gen1 systems. The performance of Gen2 compressed volumes even exceeds that of non-compressed volumes on Storwize Gen1 systems.
To showcase the benefits of Gen2, benchmarking was performed with VMware's VMmark tool and with Oracle databases running OLTP workloads.
The following benefits were observed on Storwize V7000 Gen2 over V7000 Gen1 for the Oracle benchmarks:
- 70+% compression ratio for Oracle database files
- Three times faster response time
- Five times lower virtual disk (VDisk) read latency
- Four times faster managed disk (MDisk) response time
- Three times fewer managed disk (MDisk) write operations (compression reduces back-end I/O load, making the system more efficient and thus delivering better performance)
- With a higher number of processors, the second-generation Storwize V7000 system seamlessly supports I/O activity with compression enabled
The following benefits were observed on Storwize V7000 Gen2 over V7000 Gen1 for the VMmark benchmarks:
- Average 50% compression observed for Red Hat and Windows virtual machines
- The e-commerce workload showed a 30% improvement in benchmark scores
- The e-commerce workload showed 35% lower latency
- Mail server and web application workload benchmark scores were similar across both generations; however, lower processor utilization was observed on Gen2 even when running the benchmarks on compressed volumes
For more details, refer to the following ISV technical papers:
Using the IBM Storwize V7000 Real-time Compression feature with Oracle
Benefits of IBM Storwize V7000 Real-time Compression feature with VMware vSphere 5.5
Disclaimer: The thoughts expressed above are the collective thoughts of Shashank Shingornikar and Mandar Vaidya. They do not necessarily represent those of their employer.
Dockerizing Oracle Database
Docker is the next buzzword on the net. While a lot of work has been done on dockerizing various applications, software like Oracle Database still poses challenges during installation, configuration and execution. This blog entry gives users a flavour of integrating Oracle Database to run in a Dockerized environment.
Are you sure this can be done?
Yes. Not only can it be done, but once it is up you will hardly notice any difference compared to an instance/database running on bare metal or a VM.
Should this be done ?
For running production Oracle Database instances, we suggest not: Oracle will not support this officially.
So, what are the ideal use cases?
The use cases could be: creating an environment for testing a PSU or CPU patch provided by Oracle, an environment for developers who have specific requirements, or a training environment. Even your experienced DBAs will find it handy as a sandbox for personal testing.
What does the flow look like?
In the POC environment we made this a multi-step process for better understanding and granular control. Here is the basic flow.
The Platform Image (PI) is the base image on which customizations are made using a Dockerfile. The PI is read-only in nature, so in order to customize it, Docker creates an intermediate container which holds only the changes made to the PI. When these intermediate containers are saved, they persist their state as read-only images. The process continues until an ORACLE_HOME image containing the Oracle binaries is created.
Database instances are spawned from this image. In order to create and save the database on persistent storage, a data-only container can be created. The data container, together with a container based on the ORACLE_HOME image, is used to create the database instance and the database underneath. A minimal sketch of the flow follows.
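The sketch below uses standard Docker commands; the image, container and path names are illustrative, the Oracle installation steps inside the Dockerfile are omitted, and the image is assumed to start the instance by default:

# Build the read-only ORACLE_HOME image from the platform image using a Dockerfile
docker build -t oracle/oh:12c .

# Create a data-only container that holds the database files on persistent storage
docker create -v /u02/oradata --name dbdata oracle/oh:12c /bin/true

# Spawn a database container from the ORACLE_HOME image, attached to the data container
docker run -d --name orcl1 --volumes-from dbdata oracle/oh:12c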
Here is how the final picture looks. In this environment, two 12c dockerized database instances are running; the CDB database looks as shown in the picture above. In order to create, clone, or plug in the PDBs, an NFS mount point from a Storwize V7000 system is mapped on the Docker host.
So what are the advantages of this?
There are several advantages. Although there are multiple images, each image being read-only gives a consistent starting point every time a container is spawned from it. Each container has its own namespace isolation consisting of PID, network and control groups (cgroups), so each running container behaves like an independent host in spite of being based on the same image. The images can be saved or pushed to an on-premises repository, making them readily available to other Docker hosts, and they can be moved across platforms: servers across a network, a laptop, or even a public cloud space such as Amazon Web Services.
Medical imaging is one of the top developments changing the way the medical field is looked upon. For example, CT and MRI technology is among the most significant medical innovations feeding medical imaging. Advances like these are letting medical practitioners and researchers solve the mysteries of various diseases and enable precise treatments. With these advances, the influence of medical imaging on the healthcare industry is growing, and its usage is expanding beyond diagnostics into the areas of prevention, research and therapy.
With that, there has been a surge in the different modalities that capture medical images, which take the form of DICOM files. With the various options available for producing medical images, a lot of medical imaging data is being generated that needs to be stored for subsequent treatment, consultation or research. Some data may be of immediate use, while other data may be needed in the future if a disease reoccurs.
A few vendors in the market provide archiving of medical images, and GE is one of the prominent ones. With GE Healthcare Centricity Enterprise Archive, one can save or archive the medical image files on IBM Spectrum Scale running on the IBM Elastic Storage Server. IBM Spectrum Scale can be configured with IBM Transparent Cloud Tiering, which moves older or less used files from IBM Spectrum Scale to IBM Cloud Object Storage or any other cloud target supported by IBM Transparent Cloud Tiering. The same medical images or files can be seamlessly restored from the cloud using this solution.
The diagram below shows the architecture of the whole solution.
In this solution, GE Centricity Clinical Archive saves the medical imaging data on IBM Spectrum Scale running on the IBM Elastic Storage Server, and the same data is then transferred in the form of objects to IBM Cloud Object Storage using IBM Transparent Cloud Tiering running in IBM Spectrum Scale. Policies can be written in IBM Spectrum Scale to define which medical images are transferred to and from the cloud; a sketch follows.
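As an indicative sketch only (the pool names, file pattern and 90-day threshold are assumptions; refer to the IBM Transparent Cloud Tiering documentation for the exact external-pool setup), tiering of this kind is typically expressed as IBM Spectrum Scale ILM policy rules along these lines:

/* Illustrative rule: migrate DICOM files not accessed in 90 days
   to an external cloud pool managed by Transparent Cloud Tiering */
RULE 'ArchiveOldStudies' MIGRATE FROM POOL 'system'
     TO POOL 'cloudtier'
     WHERE LOWER(NAME) LIKE '%.dcm'
       AND (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90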
For complete information on the solution, you can refer to this paper.
Disclaimer: Above are my personal thoughts and not necessarily of my employer.
If you have virtualized your datacentre server resources on a VMware platform, it becomes important for the partnering resources, such as storage and network, to align with VMware virtualization technology. For example, if you want to use storage with your VMware virtualized servers, it is important that the storage supports VMware APIs such as the vStorage APIs for Array Integration (VAAI) and the vSphere APIs for Storage Awareness (VASA) to take advantage of VMware's integration with storage resources. Users who have deployed their VMware virtual infrastructure on external storage arrays require features that enable efficient utilization of storage capacity; VMware introduced the SCSI UNMAP primitive in vSphere 5.0 to address this requirement.
IBM has a long-standing technical partnership with VMware to integrate each other's technology for effective consumption of mutual resources, benefiting end users and customers. Along with other IBM storage offerings such as the IBM Spectrum Virtualize and IBM Spectrum Accelerate storage families, the IBM DS8000 storage family supports VMware integration points such as VAAI and VASA.
What is VMware VAAI SCSI UNMAP?
It's not new, and many VMware users already know about it, but for those who are unaware of VAAI SCSI UNMAP:
If you have used Storage vMotion, vSphere snapshot consolidation/deletion, or virtual machine deletion on a thin-provisioned LUN from an external storage array, you may have wondered why the space is never released back to the storage array. Prior to vSphere 5.0, space released from vSphere was never returned to the storage array for the creation of another LUN or for use by another storage host, which was not an effective way to consume storage in a VMware environment. With the introduction of the VAAI SCSI UNMAP primitive, space released from vSphere on a thinly provisioned LUN is returned to the storage array. This feature is designed to effectively reclaim deleted space to meet continuing storage needs.
VAAI SCSI UNMAP dates back to vSphere 5.0, so why write about it now? It's simple: it is now supported on IBM DS8880.
IBM DS8880 now supports the VMware SCSI UNMAP primitive. With this, storage space is returned to the DS8880 for other use when space is released from the vSphere layer using VAAI SCSI UNMAP: the VMware host notifies the storage device about freed space on a thin-provisioned LUN by sending a SCSI UNMAP command, and the DS8880 releases any entire extents that were allocated within that freed space. On the DS8880, the extent size is either 16 MB for small extents or 1 GB for large extents. Therefore, for any space to be released, the request must cover at least 16 MB, and if the request is not aligned on extent boundaries, only the full extents contained within the requested range are released.
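As a worked example of that alignment rule: if vSphere frees a 100 MB region starting at offset 8 MB on a small-extent volume, the extent boundaries fall at 16 MB multiples, so only the five 16 MB extents wholly contained in the range (16 MB through 96 MB) are released; the partial extents at either end stay allocated, so 80 MB of the 100 MB freed is reclaimed. The same granularity is visible in the dscli output further below, where realcap drops from 88944 MiB to 79536 MiB, a difference of 9408 MiB, which is exactly 588 full 16 MiB extents.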
What are the requirements for using VAAI SCSI UNMAP with DS8K?
- The VAAI SCSI UNMAP feature is available on DS8880 with the latest release, 8.2.3 (GA: 9th June, 2017)
- ESXi host version 5.0 or higher
- Thin-provisioned volumes on DS8880 with 16 MB extents
How do I use VAAI SCSI UNMAP with DS8880?
IBM DS8880 storage supports VAAI. Verify this from vCenter by browsing to the VMware datastore created on a DS8880 LUN: a 'Hardware Acceleration' status of 'Supported' confirms that the storage array supports VAAI.
After verifying that the DS8880 supports VAAI, find out whether it supports the SCSI UNMAP primitive. You need to use the ESXi command line interface for that.
- First, find the NAA ID of the DS8880 LUN backing the VMware datastore.
[root@esx:~] esxcli storage core device list
Display Name: IBM Fibre Channel Disk (naa.6005076308ffc54c0000000000001100)
Has Settable Display Name: true
Device Type: Direct-Access
- Using this NAA ID, display the device-specific VAAI details to see whether the DS8880 supports the SCSI UNMAP primitive for dead space reclamation on that device.
[root@esx:~] esxcli storage core device vaai status get -d naa.6005076308ffc54c0000000000001100
VAAI Plugin Name:
ATS Status: supported
Clone Status: unsupported
Zero Status: supported
Delete Status: supported
The "Delete Status: supported" line indicates that the host can send SCSI UNMAP commands to the underlying DS8880 storage when space reclamation is requested.
OK, VAAI SCSI UNMAP is supported on the DS8880. Now, how do I release space back to the DS8880?
Assume you are hosting a virtual machine on a datastore created from a thinly provisioned DS8880 LUN.
- Before issuing SCSI UNMAP, check the volume details and used capacity of the DS8880 volume.
dscli> showfbvol 1105
Date/Time: June 2, 2017 11:38:15 AM MST IBM DSCLI Version: 188.8.131.52 DS: IBM.2107-75LR811
datatype FB 512
cap (MiB) 409600
cap (2^30B) 400.0
cap (10^9B) -
cap (blocks) 838860800
reqcap (blocks) 838860800
realcap (MiB) 88944 ⇒Capacity before performing storage vMotion
migratingcap (MiB) 0
- After performing an operation on the host that frees up space (such as Storage vMotion), run the SCSI UNMAP command from the ESXi host so that the DS8880 storage releases the freed space that was allocated to this LUN.
[root@esx:~] esxcli storage vmfs unmap -l D2
The "-l" option identifies the volume by the volume label.
For more details on the esxcli storage vmfs unmap command, see https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513
- Now check the space on the DS8880 volume and verify that the storage space has been released.
dscli> showfbvol 1105
Date/Time: June 2, 2017 11:39:00 AM MST IBM DSCLI Version: 184.108.40.206 DS: IBM.2107-75LR811
datatype FB 512
cap (MiB) 409600
cap (2^30B) 400.0
cap (10^9B) -
cap (blocks) 838860800
reqcap (blocks) 838860800
realcap (MiB) 79536 ⇒Capacity after performing VAAI SCSI UNMAP
migratingcap (MiB) 0
In summary, VMware VAAI SCSI UNMAP extends the usefulness of thin provisioning at the array level by maintaining storage efficiency throughout the life cycle of the vSphere environment. With IBM DS8880 storage now supporting the VAAI SCSI UNMAP primitive, the array can reclaim the space released by vSphere, helping maintain storage efficiency for VMware deployments.
For more information, refer to the IBM DS8880 product page: https://www.ibm.com/systems/storage/hybrid-storage/ds8000
The official release of VMware vSphere Virtual Volumes (VVol) in Q1 2015 generated tremendous interest among customers. VVol extends VMware's software-defined story to its storage partners and completely changes the paradigm in which storage is consumed by the hypervisor. With a VVol implementation, storage-intensive tasks are off-loaded by the server hypervisor to application-aware, policy-driven storage. It also simplifies storage management, puts virtual machines in charge of their own storage, and gives more fine-grained control over virtual machine storage. With Virtual Volumes, an individual virtual machine, not the datastore, becomes the unit of storage management, while the storage hardware gains complete control over virtual disk content, layout, and management.
IBM is VMware's strategic alliance partner and a key design partner for VVol. IBM announced support for VVol with XIV storage in lock-step with VMware's general availability of the vSphere 6.0 product. IBM's integration of Virtual Volumes in XIV is based on the VMware APIs for Storage Awareness (VASA 2.0) delivered by IBM Spectrum Control Base Edition. This integration facilitates off-loading of the following storage-intensive virtual machine operations to IBM XIV storage with predictable performance and effective capacity utilization:
- Snapshot operations of a virtual machine using a Virtual Volumes datastore
- Cloning of a virtual machine using a Virtual Volumes datastore
- Storage migration of a virtual machine from a non-VVol datastore to a Virtual Volumes datastore
The figure below shows a pictorial representation of a Virtual Volumes implementation with XIV using IBM Spectrum Control Base Edition.
IBM Spectrum Control Base Edition implements the VMware Virtual Volumes APIs, providing a separate management bridge between vSphere and XIV storage that keeps the data path separate from the management path. It enables communication between the vSphere stack (ESXi hosts, vCenter server and the vSphere Web Client) and IBM XIV storage, and maps virtual disk objects related to virtual machines and their derivatives, such as snapshots and clones, directly to the XIV storage system.
ESXi hosts access Virtual Volumes through an intermediate point in the data path called the Protocol Endpoint (PE), also referred to as the Administrative Logical Unit (ALU) on XIV storage. The ALU allows XIV storage to carry out storage-related tasks on behalf of the ESXi host.
Virtual Volumes reside in storage containers on the XIV. Storage containers represent groupings of Virtual Volumes attached to virtual machines, and IBM Spectrum Control Base Edition associates each storage container with a single XIV pool. A storage container is characterized by a storage service, which combines storage capacity with storage attributes such as encryption and thick/thin provisioning type. The storage container acts as a virtual datastore and matches the application-specific requirements of a virtual machine.
For a detailed step-by-step implementation of VVol on IBM XIV using IBM Spectrum Control Base Edition, refer to this technical paper: https://www.ibm.com/partnerworld/page/stg_ast_sto_wp-vmware-vsphere-virtual-volumes-using-xiv
IBM XIV delivers excellent levels of storage abstraction, easy automated provisioning and policy-compliant capabilities through its integration with VVol. IBM Spectrum Control Base Edition delivers the VASA capabilities for XIV's tight integration with VVol and plays a strategic role in IBM's software-defined storage initiative by providing the storage agility and efficiency required for today's demanding application workloads.
Here are some videos you might also like to view to hear directly from VMware and IBM on our strategic partnership and joint VVol development efforts.
Powerful IBM XIV Storage Integration with VMware Virtual Volumes - Laura Guio
VMware vSphere Virtual Volumes and IBM XIV: A perfect fit
Additionally we have a Virtual Volume demo you should check out:
vSphere Virtual Volumes (VVOL) with IBM XIV Storage System
If you happen to be onsite at the IBM Edge2015 event in Las Vegas the week of May 11th, be sure to attend the IBM-VMware session on Monday or Friday on this very topic:
Monday, 5/11 4:30 - 5:30 pm, San Polo 3503
Friday, 5/15 10:30 - 11:30 am, San Polo 3503
IBM Spectrum Control Base Edition: Orchestrate and Automate IBM Storage with VMware
Presenters: Yossi Siles, IBM and Rawlinson Rivera, VMware
With the surge of smartphones and tablets has come a surge of applications for storing and sharing data. Employees of many organizations tend to use the applications of their choice to store and share data. This puts organizations at risk regarding data security and puts pressure on them to provide an official application that allows users to store and share data securely while keeping the organization in control of its data.
To gain control over data, organizations need very robust, reliable and easy-to-integrate storage for their file sharing applications. Along with security and control over their data, organizations are looking for additional services or features that can enhance the productivity of their employees.
Citrix offers ShareFile—an enterprise-class, IT-managed, secure file sync and sharing service. ShareFile offers IT the ability to control sensitive corporate data while meeting the mobility and collaboration needs of users and the data security requirements of the enterprise.
Citrix provides multiple options to store the data be it on premise, in the cloud or a combination of both to meet the needs for data sovereignty, compliance, performance and costs. For organizations that require increased data protection, ShareFile offers customers the ability to encrypt data with their own encryption keys.
IBM has more than one option for the on-premises storage part of the solution. One answer for a highly scalable file sync and share solution is IBM Spectrum Scale; the other option is IBM Storwize V7000 Unified, which provides a unique combination of file and block storage for small and medium file sync and share deployments.
ShareFile extends an organization's data strategy to include existing network file drives, SharePoint and OneDrive for Business, allowing a single point of access to all data sources. StorageZone Connectors make it easy to securely access documents that otherwise cannot be accessed outside corporate networks or on mobile devices. Any enterprise content management (ECM) system can be accessed with the StorageZone Connectors SDK, expanding the types of data users can access and edit on the go via ShareFile.
Advanced security features, including remote wipe, device lock, passcode protection, white/black listing and data expiration policies, allow you to determine exactly how sensitive data is stored, accessed and shared. You can track and log activity in real time and create custom reports to meet compliance requirements.
While IBM Spectrum Scale brings scalability and performance, it also adds value through the features below:
- File encryption and secure erase
- Transparent flash cache
- Network performance monitoring
- Active File Management (AFM) parallel data transfers
- NFS version 4 support and data migration
- Backup and restore improvements
- File Placement Optimizer (FPO) enhancements.
Other features of IBM Storwize V7000 Unified:
- IBM Storage Mobile Dashboard
- Dynamic Migration
- IBM Easy Tier
- Thin provisioning
- Flash drives
- Active File Management (AFM) parallel data transfers
- IBM HyperSwap
- IBM Real-time Compression
- Encryption for virtualized storage
Below is a high-level flow diagram of the solution using IBM Spectrum Scale.
For a high-level overview of Citrix ShareFile and IBM Storage Systems, follow the link below:
For more information on a solution with IBM Spectrum Scale, follow the link below:
For more information on a solution with IBM Storwize V7000 Unified, follow the link below:
Disclaimer: The above are my personal thoughts and not necessarily those of my employer.