Reducing Video Surveillance Storage Costs with IBM Cloud Object Storage
As the use of camera-based security surveillance systems grows due to increased security concerns, so does the demand for better image quality. Demand for longer video retention is also growing, driven by legal compliance. Higher image quality and longer retention together mean more data to store and archive at the storage level.
High Resolution Cameras, Longer Retention, No Worries
While the cost of recording devices is affordable, the storage costs of meeting law enforcement regulations for evidence are more of a concern. This leads to the question: “Can I reduce video storage archiving costs without compromising the secure chain of custody required for evidence?”
The short answer is “Yes”. IBM Cloud Object Storage solves this problem by providing a scalable solution at an affordable price.
What is the Solution?
An IBM Storwize system is configured as the storage for keeping the live and near-line archived surveillance content. Tiger Bridge from Tiger Technology is configured on the Video Management Server (VMS) to move older content from the Storwize system volumes to IBM Cloud Object Storage. Tiger Bridge is a secure and flexible software connector for Windows that transparently replicates and tiers data from a standard NTFS volume to an on-premises or off-premises target without affecting users, applications, or workflows. IBM Cloud Object Storage is configured as the archive tier for retaining video surveillance content over long retention periods. It gives organizations the flexibility, scalability, and simplicity required to store, manage, and access today’s rapidly growing video surveillance data in private, hybrid, and public cloud environments.
This solution uses a modular and scalable architecture to address growing data storage requirements. With it you get:
- An IBM Storwize system for write-intensive live recording and near-line archiving.
- The Tiger Bridge connector for transparent data replication and tiering between IBM Storwize and Cloud Object Storage.
- IBM Cloud Object Storage for archiving content over longer retention periods.
- Optimized cloud solutions for on-premises private clouds or hybrid cloud service providers.
How Does It Work?
IBM Storwize system volumes are presented to the Windows recording servers and configured as local volumes using the standard NTFS file system for storing the live and archive data. Tiger Bridge scans the data on the NTFS volume and transparently replicates and tiers it to IBM Cloud Object Storage based on the parameters configured in the Tiger connector. For example, you can configure it to move data based on the accessed or modified timestamps of the video archives.
Data Migration: A background Tiger Bridge process replicates and tiers data transparently from the IBM Storwize system (NTFS volume) on the VMS to IBM Cloud Object Storage.
Recall: If a security desk client requests data that is no longer resident on the IBM Storwize system, Tiger Bridge transparently retrieves it from IBM Cloud Object Storage and streams it to the security viewer without any loss of video frames.
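Tiger Bridge applies such policies through its own connector configuration, but the underlying idea is easy to picture. The sketch below is not Tiger Bridge code; it is a generic, hypothetical illustration of an age-based tiering policy that moves archives untouched for 30 days to an S3-compatible IBM Cloud Object Storage bucket (the path, endpoint, and bucket names are placeholders):

# move archives not accessed for 30+ days to a COS bucket (all names are placeholders)
find /mnt/VideoArchive -type f -atime +30 -print0 | \
  while IFS= read -r -d '' f; do
    aws --endpoint-url https://s3.us-south.cos.example.com \
      s3 mv "$f" "s3://surveillance-archive/${f#/mnt/VideoArchive/}"
  done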
Where Can It Be Deployed?
Our time-tested solutions turn storage challenges into business advantage by reducing storage costs while reliably supporting both traditional and emerging cloud-born video surveillance workloads.
Below are some use cases that will benefit from IBM Cloud Object Storage solutions:
- Airports, metro railway stations, and city surveillance systems with long retention requirements
- Cloud service providers (CSPs) hosting and delivering cloud-based video surveillance solutions
- Law enforcement departments looking for body camera solutions
The IBM Storage video surveillance solution offers a modular and cost-effective multi-tier cloud storage architecture to support next-generation video surveillance technologies.
VersaStack unified storage is a single, integrated storage infrastructure with unified central management that simultaneously supports Fibre Channel and network-attached storage (NAS) data formats.
VersaStack solution for file and block is an infrastructure consisting of network, compute, and storage designed for quick deployment and rapid time to value. The solution includes Cisco UCS Integrated Infrastructure together with IBM® software-defined storage (SDS) solutions to deliver extraordinary levels of agility and efficiency.
Why VersaStack Unified (Block and File Storage Services)?
Enterprises need both block and file access on a single infrastructure platform to reduce their IT costs while effectively and easily managing their massively growing data, and to reduce wasted data center resources in order to lower their total cost of ownership (TCO).
VersaStack brings Cisco UCS Integrated Infrastructure (which includes Cisco Unified Computing System (Cisco UCS) servers, Cisco Nexus switches, and Cisco UCS Director management and orchestration software) installed and configured with IBM Spectrum Scale™ SDS for file services and IBM System Storage®. VersaStack unified storage forms integrated infrastructure building blocks to host file services workloads [such as Network File System (NFS), Server Message Block (SMB), and Common Internet File System (CIFS)]. It provides faster delivery of solution applications, greater reliability, easier management, simplified installation, and lower IT infrastructure costs.
High-level overview of VersaStack unified storage
To serve logical volumes and files, the hardware and software to provide block and file services are integrated into one product. Viewed from its clients, one part of VersaStack unified storage is a storage server and the other part is a file server; and therefore, it is called Unified.
VersaStack uses internal storage to generate and provide logical volumes to storage clients. Using IBM Spectrum Virtualize, it can also manage the virtualization of external storage systems.
In addition to providing logical volumes, VersaStack offers unified storage features for accessing file system space and the files in those file systems. It uses file sharing, file access, and file transfer or copy protocols, and so it acts as a file server. The file server subsystem of the VersaStack unified storage solution typically consists of two file modules powered by IBM Spectrum Scale. These file modules run the IBM Spectrum Scale software, version 4.2.3 at the time of writing this paper. The file modules are the internal storage clients of the system: they use the logical volumes provided by the system to store files and share them with file clients. The base operating system (OS) of the file modules is Red Hat Enterprise Linux 6.8 or later, and they use IBM Spectrum Scale, a distributed file system, to store and retrieve files. To make the content of IBM Spectrum Scale accessible to file clients, the file modules use the following file sharing and file access protocols:
- File Transfer Protocol (FTP)
- Hypertext Transfer Protocol Secure (HTTPS)
- Network File System (NFS)
- Secure Copy Protocol (SCP)
- Secure FTP (SFTP)
- Server Message Block (SMB)
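As a concrete illustration, protocol exports on Spectrum Scale protocol (CES) nodes are typically created with the mm* command family. The commands below are a sketch only; option syntax varies by Spectrum Scale release, and the file system path, share name, and client specification are placeholders:

# create an NFS export of a Spectrum Scale directory (placeholder path and clients)
mmnfs export add /gpfs/fs1/projects --client "10.0.0.0/24(Access_Type=RW,Squash=no_root_squash)"
# create an SMB share of the same directory
mmsmb export add projects /gpfs/fs1/projects
# list the configured exports
mmnfs export list
mmsmb export list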
Physical hardware configuration of VersaStack unified storage
In this configuration, IBM Spectrum Scale version 4.2.3 is installed on Cisco UCS 5108 blade servers. The Cisco UCS blade servers are connected over 40 Gb Ethernet to Cisco UCS 6248 series fabric interconnect switch modules. The IBM Storwize® V5030 storage controller is direct-attached to Cisco MDS 9148S 16G multilayer fabric switch modules with 16 Gbps Fibre Channel (FC) connectivity. Using a Cisco Nexus 93180YC extender switch module, the 16 Gbps FC connectivity is extended to the Cisco UCS 6248 series fabric interconnect switch modules. All the switch modules are configured with cluster links to ensure maximum availability.
An alternate solution is the direct-attached storage area network (SAN) storage design, which provides a highly redundant, high-performance foundation for deploying an SDS solution. This design uses direct-attached Fibre Channel storage connectivity for compute, enabling a simple, flexible, and cost-effective solution. This minimal configuration does not require the Cisco MDS 9148 Fibre Channel switches or the Cisco UCS 6248UP Fabric Interconnect switches, and the Cisco Nexus 93180YC-EX network switches are optional. Instead, it uses a direct Fibre Channel connection between the Storwize V5030 and the Cisco UCS 6324 Fabric Interconnect modules. This removes the requirement for a dedicated FC switching environment, as all SAN switching and zoning functions are performed by the Cisco UCS 6324 Fabric Interconnect module and managed through Cisco UCS Manager.
This published white paper provides additional technical details and best practices for configuring file services (NFS, SMB, and CIFS) on the VersaStack unified storage system.
VersaStack is a converged infrastructure solution from IBM and Cisco, delivering faster deployment and provisioning, consolidated management, simplicity, efficiency, and high availability. VersaStack is built with IBM storage solutions with best-in-class SDS technologies and Cisco UCS integrated infrastructure, Cisco servers, fabric interconnects, and switches (all managed using Cisco UCS Director). These are some of the most well-respected data center products available, built by two industry giants.
VersaStack unified storage facilitates network-attached storage file services using IBM Spectrum Scale software
By combining industry-leading server, network, and storage components with consolidated management, VersaStack can shrink deployment time, reduce your costs, and increase resource utilization. Built-in intelligence makes the platform easy for users and IT alike, enabling IT to offload management tasks without adding risk.
IBM and Hortonworks have announced an integrated analytics solution using IBM Spectrum Scale / Elastic Storage Server (ESS) and the Hortonworks Data Platform (HDP). This announcement comes a week after a series of other updates to the IBM Spectrum and Cloud Object Storage product lines. It sees HDP being certified on IBM Spectrum Scale on both POWER8 and x86.
Ed Walsh, general manager for IBM Storage and Software Defined Infrastructure, said: “Every organization is becoming a digital organization. With this announcement, IBM is delivering a powerful platform to extend the use of data and for cognitive applications. This announcement shows our partner community IBM’s commitment to helping clients grow, develop, and transform the use of their own data with less complexity.”
Big Data & Analytics with Hadoop
For rapidly growing, unstructured data, Hadoop is the platform of choice for many organizations, enabling them to store, process, and analyze petabytes of information. Traditional data repositories cannot scale with unstructured big data workloads, so enterprises are adopting Hadoop to store large chunks of data and run analytics that derive valuable insights. The current market size for Hadoop is roughly $6B and is forecast to grow to $50B by 2020.
HDP is the secure, enterprise-ready open source Apache™ Hadoop® distribution based on a centralized architecture (YARN). HDP addresses the complete needs of data-at-rest, powers real-time customer applications and delivers robust analytics that accelerate decision making and innovation.
- Pure-play, 100% open source distribution
- Hortonworks is the #1 Apache Hadoop committer
- More than 1,000 customers
- ODPi compliant
- Apache Spark is part of the HDP distribution
Note: IBM Spectrum Scale is already certified and supported with IBM’s Hadoop distribution (IBM IOP/BigInsights)
Better storage for Hadoop
The default storage for Hadoop is HDFS, the Hadoop Distributed File System, which runs on storage-rich servers (storage internal to the servers).
Hadoop finds a value-added data platform in IBM Spectrum Scale and the IBM Elastic Storage Server (ESS), which provide enhanced features and eliminate the need to keep multiple data copies. The following table illustrates some of the ways IBM Spectrum Scale and ESS enhance Hadoop.
| HDFS limitation | Benefit | IBM Spectrum Scale / Elastic Storage Server (ESS) |
| --- | --- | --- |
| Clients have to copy data from enterprise storage into HDFS to run Hadoop jobs, because Hadoop does not run directly on standard protocols like SMB or object. | Reduce data center footprint | Spectrum Scale / ESS supports access to the same data through HDFS, NFS, SMB, and object interfaces. No data copying is required to run Hadoop analytics. |
| HDFS is a shared-nothing architecture in which disks and cores grow in the same ratio; it is less efficient for high-throughput jobs. | Reduce cluster sprawl | ESS is shared storage best known for its scalability and performance. |
| Costly data protection: HDFS defaults to 3-way replication (erasure coding in HDFS has some limitations and is perhaps more appropriate for cold data). | Reduce data protection overhead | ESS software RAID eliminates the need for 3-way replication to achieve data protection. |
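To make the “no data copying” row concrete, here is a hypothetical sketch: with Spectrum Scale’s multi-protocol access, the same file can be read through the POSIX/NFS interface and by Hadoop tooling. The paths below, and the assumption that the HDFS Transparency connector maps the file system namespace this way, are for illustration only:

# the same Spectrum Scale file, seen through two interfaces
ls -l /gpfs/fs1/datalake/clickstream/2017-06-01.log    # POSIX / NFS view
hadoop fs -ls /datalake/clickstream/2017-06-01.log     # Hadoop view via HDFS Transparency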
Bringing Hortonworks Data Platform to IBM Spectrum Scale or IBM Elastic Storage Server provides three key benefits: better storage efficiency, hybrid storage, and high performance. In terms of the first benefit, Elastic Storage Server uses erasure coding that eliminates the need to have multiple data copies. Second, the combined service extends on-premises storage to the cloud, delivering economic, security, and accessibility benefits. Third, Elastic Storage Server delivers high-speed data throughput.
Real-time collaboration and information sharing are key drivers of an enterprise’s productivity and innovation. Finding solutions to enable such dynamic sharing in an enterprise setting while maintaining control, however, can be a challenge. Some organizations look to consumer-grade, cloud-based file sharing options that offer the scalability, ease of use and access users want but store sensitive company data on external servers. This exposes organizations to risks of data leaks while limiting IT visibility. Other options include using existing enterprise collaboration and content management systems that might be challenging to maintain and cumbersome for users.
What exactly is the solution?
The combined IBM® Spectrum Scale for object storage and ownCloud software technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution. ownCloud provides universal file access through a common file access layer on top of IBM Spectrum Scale for object storage, and the data files are kept on-premises in Spectrum Scale for object storage. ownCloud allows enterprise IT organizations to regain control of sensitive data with managed file sync and share that gives users universal access to all of their data:
- Manage and protect data on-premises – using IBM Spectrum Scale for object storage, with the complete software stack running on servers inside the data center, controlled by trusted administrators, and managed according to established policies.
- Integrate with existing IT system resources and policies – such as authentication systems, user directories, governance workflows, intrusion detection, monitoring, logging, and storage management.
- Provide access through a comprehensive set of application programming interfaces (APIs) and mobile libraries to customize system capabilities, meet unique service requirements, and accommodate changing user needs.
Why do enterprises want an on-premises file sync and share solution?
Storing data off-premise may strip an organization’s ability to manage and control its data, or to ensure that data can be deleted. Few enterprises, however, are willing to forgo the benefits that cloud services provide in the advancement of agility and improved business processes. That leaves them struggling with how to use these technologies without importing security risks. They also recognize that users are increasingly able to migrate to external services that provide them greater flexibility and mobility than that offered by the enterprise.
By retaining on-premises manageability of file sync and share services, IT can use a private cloud solution to reconcile the need for cloud technology with requirements for security and privacy, and regain control of sensitive data without unwanted exposure. With the ability to enhance control and govern access to files, IT administrators can set sophisticated rules for user and device connections and deny access based on those rules. Further, the capabilities and extensibility of on-premises file sync and share match the ease of use and universal access that first drove consumption of cloud services, yet IT keeps sensitive assets in its own cloud environment.
Solution Lab testing
This solution consists of multiple servers installed with the ownCloud server software. ownCloud is a PHP web application running on top of Apache on Linux. This PHP application manages every aspect of ownCloud, from user management to plug-ins, file sharing, and storage. Attached to the PHP application is a database where ownCloud stores user information, user-shared file details, plug-in application states, and the ownCloud file cache (a performance accelerator). ownCloud accesses the database through an abstraction layer, enabling support for Oracle, MySQL, SQL Server, and PostgreSQL. Complete web server logging is provided through the web server logs, and user and system logs are written to a separate ownCloud log or can be directed to syslog.
In the lab testing environment, Active Directory (AD) is integrated with ownCloud for user account provisioning, and IBM Spectrum Scale for object storage is configured with local authentication. It is also possible to configure IBM Spectrum Scale for object storage with an enterprise directory server such as AD or Lightweight Directory Access Protocol (LDAP).
OpenStack Swift is installed on the protocol node(s) of the IBM Spectrum Scale for object storage.
IBM Spectrum Scale is a proven, enterprise-class file system, and OpenStack Swift is a best-of-breed object-based storage system. IBM Spectrum Scale for object storage combines these technologies to provide a new type of cloud storage that includes efficient data protection and recovery, proven scalability, and performance; snapshot and backup and recovery support; and information lifecycle management. Through these features, IBM Spectrum Scale for object storage can help simplify data management and allow enterprises to realize the full value of their data.
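Because the object layer is standard OpenStack Swift, it can be exercised with the stock swift client. A minimal sketch, assuming Keystone authentication on the protocol nodes; the auth URL, credentials, and container name are placeholders, and additional domain options may be needed depending on the Keystone setup:

# upload a file to a container on Spectrum Scale for object storage
swift --os-auth-url https://ces-node.example.com:5000/v3 \
      --os-username owncloud --os-password secret --os-project-name files \
      upload owncloud-data report.pdf
# list the container to confirm the object landed
swift --os-auth-url https://ces-node.example.com:5000/v3 \
      --os-username owncloud --os-password secret --os-project-name files \
      list owncloud-data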
ownCloud is a self-hosted file sync and share server. It provides access to on-premises data through a web interface and sync clients, offering a platform to view, sync, and share across devices easily while giving enterprises the ability to manage and control their data. ownCloud’s open architecture is extensible through simple but powerful APIs for applications and plug-ins, and works seamlessly with IBM Spectrum Scale for object storage.
The combined IBM Spectrum Scale for object storage and ownCloud server technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution.
To learn more about the solution, please see the solution technical paper: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_on-premise-file-syn-share-owncloud
Patients increasingly expect their physicians to provide higher-quality healthcare with intelligent, immediate insights from their radiological images, clinician notes, and lab results. They demand simple diagnostic guidance, customized treatment options, and immediate, secure digital access to their personal medical information on their mobile devices.
The transformation of radiological imaging services from volume-based to patient-centric, value-based imaging enables a closer dialogue between radiologists, physicians, and specialists. This transformation delivers a superior patient experience, higher clinical accountability, and relevant diagnostic insights and clinical decisions; it minimizes medical errors, reduces the complexity of care delivery models, establishes consistent outcomes for chronic diseases, achieves tighter integration with non-radiology systems (EMR, ambulatory, HIS, lab services, and more), and improves communication across the entire healthcare enterprise. As defined by the American College of Radiology, Imaging 3.0 is a multiphase program initiative covering services, technology, tools, and processes. Leveraging this initiative, radiologists adapt how they manage their practices, patient care, and their own futures as the industry transforms itself from volume-based to value-based imaging services.
The current reality is that many healthcare systems are not designed to facilitate easy information sharing across the enterprise, which is particularly true with medical imaging data. A lack of standards and of clear integration and interoperability between imaging and non-radiology systems diminishes communication between physicians, radiologists, and specialists, leaving minimal or no access to real-time symptomatic evidence in the collaborative point-of-care process. This potentially contributes to process delays, clinical workflow inefficiencies, and diagnostic errors.
New-era healthcare environments with mobile and cloud capabilities demand digital transformation with better data economics. It is important to give patients the ability to securely view, download, and transmit health information quickly. IBM understands many of these transformational challenges in a healthcare enterprise, and has successfully delivered pre-qualified medical imaging and archiving solutions with leading healthcare business partners, enabling and deploying their cardiology, PACS, and enterprise imaging applications on IBM storage systems including IBM Storwize® V7000, DS8870®, IBM XIV®, IBM System Storage® SAN Volume Controller, and the IBM FlashSystem™. IBM Spectrum Storage™ based solutions deliver the potential of extracting insights from data volumes and increase business agility by offering their functional capabilities as software, as a cloud, or as a managed service offering. These are all complete, ready-to-deploy, proven, high-performance solutions that help accelerate time-to-value with reliability, security, and speed.
Many satisfied IBM clients, including emergency centers, radiology departments, multi-specialty clinical groups, and hospital networks, are currently running their imaging applications on a variety of IBM storage systems, whether on premises at a local datacenter, globally at multiple datacenters, or delivered from the cloud. They leverage the DS8870 for enterprise-critical applications, the Storwize family for virtualization capabilities, FlashSystem for application and patient-data acceleration facilitating life-critical response, or XIV for cloud-specific, standards-based deployments. IBM storage systems also support built-in data encryption, instantaneous video imaging for angioplasty, laparoscopy, endoscopy, and other clinical procedures, and real-time compression of non-imaging data.
As patient-centric delivery models continue to evolve with the transition from volume-based to value-based imaging services, IBM solution choices become very important in designing and implementing flexible storage architectures for imaging applications that facilitate reliable, secure, and fast access to patient data, anywhere. IBM will continue to partner with leading healthcare business partners to deliver proven, superior storage solutions that ultimately improve provider collaboration and patient outcomes at significantly lower cost.
To learn more, I recommend checking out the following paper at URL: http://www.ibm.com/common/ssi/cgi-bin/ssialias?subtype=WH&infotype=SA&appname=SNDE_HL_HL_USEN&htmlfid=HLW03016USEN&attachment=HLW03016USEN.PDF
The official release of VMware vSphere Virtual Volumes (VVol) in Q1 2015 generated tremendous interest among customers. VVol extends VMware's software-defined story to its storage partners and completely changes the paradigm in which storage is consumed by the hypervisor. With VVol, storage-intensive tasks are off-loaded by the server hypervisor to application-aware, policy-driven storage. It also simplifies storage management, puts virtual machines in charge of their own storage, and gives more fine-grained control over virtual machine storage. With Virtual Volumes, an individual virtual machine, not the datastore, becomes the unit of storage management, while the storage hardware gains complete control over virtual disk content, layout, and management.
IBM is VMware’s strategic alliance partner and is a key design partner for VVol. IBM has announced support of VVol with XIV storage in lock-step with VMware’s general availability of vSphere 6.0 product. IBM’s integration of Virtual Volumes in XIV is based on the VMware API for Storage Awareness (VASA 2.0) delivered by IBM Spectrum Control Base Edition. This integration facilitates off-loading of the following storage-intensive virtual machine operations to IBM XIV storage with predictable performance and effective capacity utilization.
- Snapshot operations of a virtual machine using a Virtual Volumes datastore
- Cloning of a virtual machine using a Virtual Volumes datastore
- Storage migration of a virtual machine from a non-VVol to a Virtual Volumes datastore
The figure below shows a pictorial representation of a Virtual Volumes implementation with XIV using IBM Spectrum Control Base Edition.
IBM Spectrum Control Base Edition implements the VMware Virtual Volumes APIs, providing a separate management bridge between vSphere and XIV storage. This management bridge separates the data path from the management path. IBM Spectrum Control Base Edition enables communication between vSphere stack (ESXi hosts, vCenter server and the vSphere Web Client) and IBM XIV storage. IBM Spectrum Control Base Edition maps virtual disk objects related to virtual machines and their derivatives such as snapshots and clones, directly to the XIV storage system.
ESXi hosts access Virtual Volumes through an intermediate point in the data path called the Protocol Endpoint (PE), also referred to as the Administrative Logical Unit (ALU) on XIV storage. The ALU allows XIV storage to carry out storage-related tasks on behalf of the ESXi host.
Virtual Volumes reside in storage containers on the XIV. Storage containers represent groupings of Virtual Volumes attached to a virtual machine, and IBM Spectrum Control Base Edition associates a storage container with a single XIV pool. The storage containers are characterized by a storage service, which combines storage capacity with storage attributes such as encryption and thick/thin provisioning type. The storage container acts as a virtual datastore and matches the application-specific requirements of a virtual machine.
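Both halves of this design are visible from the ESXi command line. A minimal sketch; the esxcli vvol namespace exists on ESXi 6.0 and later, and the output depends on your array and Spectrum Control Base Edition configuration:

# list the protocol endpoints (the ALUs presented by the XIV)
esxcli storage vvol protocolendpoint list
# list the storage containers backing the VVol datastores
esxcli storage vvol storagecontainer list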
For a detailed, step-by-step implementation of VVol on IBM XIV using IBM Spectrum Control Base Edition, refer to this technical paper: https://www.ibm.com/partnerworld/page/stg_ast_sto_wp-vmware-vsphere-virtual-volumes-using-xiv
IBM XIV delivers excellent levels of storage abstraction, easy automated provisioning, and policy-compliant capabilities through its integration with VVol. IBM Spectrum Control Base Edition delivers the VASA capabilities for XIV's tight integration with VVol and plays a strategic role in IBM's software-defined storage initiative by providing the storage agility and efficiency required for today's demanding application workloads.
Here are some videos you might also like to view to hear directly from VMware and IBM on our strategic partnership and joint VVol development efforts.
Powerful IBM XIV Storage Integration with VMware Virtual Volumes - Laura Guio
VMware vSphere Virtual Volumes and IBM XIV: A perfect fit
Additionally we have a Virtual Volume demo you should check out:
vSphere Virtual Volumes (VVOL) with IBM XIV Storage System
If you happen to be onsite at the IBM Edge2015 event in Las Vegas the week of May 11th, be sure to attend the IBM-VMware session on Monday or Friday on this very topic:
Monday, 5/11 4:30 - 5:30 pm, San Polo 3503
Friday, 5/15 10:30 - 11:30 am, San Polo 3503
IBM Spectrum Control Base Edition: Orchestrate and Automate IBM Storage with VMware
Presenters: Yossi Siles, IBM and Rawlinson Rivera, VMware
From the good old days of DOS, everyone knew the benefits of compression. Back in those days, disk capacity was scarce.
PCs then had 40 MB of HDD capacity, and programs like FoxPro 2.6 and Windows 3.1 could not be accommodated on a single disk; one had to remove the Windows 3.1 installation to make space for FoxPro. Soon, disk compression programs such as Stacker appeared. Stacker could compress the data on disk, so more space was available for applications.
Gone are the days of 40 MB HDDs; disk capacities soon increased. In the current era, data is growing tremendously, and organizations are struggling with both structured and unstructured data.
IBM offers a wide variety of storage systems, ranging from small and medium business to large enterprise, with scalable capacity. To give clients more value from storage, IBM introduced compression-enabled storage: it first delivered the Random Access Compression Engine (RACE) technology in the IBM Real-time Compression (RtC) appliances, then integrated the same technology into the IBM Storwize V7000 family in 2013.
RtC is seamlessly integrated into the Storwize V7000 software stack to compress data before writing it to disk, resulting in up to 80% storage capacity savings depending on the type of data, effectively up to five times more usable capacity from the same physical capacity in your system. RtC compresses data before it is written to disk, is completely transparent to applications, and maintains data consistency. It is implemented without any changes to applications, hosts, fabric, or network.
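Enabling RtC is a per-volume choice made at volume creation. A minimal sketch using the Storwize CLI; the volume and pool names, size, and thin-provisioning ratio are placeholders, and exact options vary by code level:

# create a compressed, thin-provisioned volume in pool Pool0 (placeholder names and sizes)
svctask mkvdisk -name oradata01 -mdiskgrp Pool0 -size 500 -unit gb \
        -rsize 2% -autoexpand -compressed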
Since its inception, many users implemented RtC on their Storwize V7000 systems. Although RtC provided great disk space savings by compressing the data, the first-generation Storwize V7000 implementation came with a performance penalty: when enabled, RtC consumed significant processing power, causing bottlenecks that could dwarf the benefit of compression.
IBM addressed this issue in the next generation of the Storwize V7000 by adding hardware compression acceleration with Intel® QuickAssist technology, which provides dedicated processing power and greater throughput for compression.
With hardware compression acceleration and better hardware resources, the Storwize V7000 Gen2 easily overcame the performance penalties seen on Gen1 systems; the performance of Gen2 compressed volumes exceeds that of non-compressed volumes on Gen1 systems.
To showcase the benefits of Gen2, benchmarking was performed with VMware's VMmark tool and with Oracle databases running OLTP workloads.
The following benefits were observed on the Storwize V7000 Gen2 over the V7000 Gen1 for the Oracle benchmarks:
- 70+% compression ratio for Oracle database files
- Three times faster response time
- Five times lower virtual disk (vdisk) read latency
- Four times faster managed disk (MDisk) response time
- Three times fewer managed disk (MDisk) write operations (compression reduces back-end I/O load, making the system more efficient and thus delivering better performance)
- With a higher number of processors, the second-generation Storwize V7000 seamlessly supports I/O activity with compression enabled
The following benefits were observed on the Storwize V7000 Gen2 over the V7000 Gen1 for the VMmark benchmarks:
- Average 50% compression observed for Red Hat and Windows virtual machines
- The e-commerce workload shows a 30% improvement in benchmarking scores
- The e-commerce workload shows 35% less latency
- Mail server workload and Web application workload benchmarking scores were similar across both generations. However, lower processor utilization was observed on Gen2 even after running benchmarks over compressed volumes.
For more details, refer to the following ISV technical papers:
Using IBM Storwize V7000 Real time compression feature with Oracle
Benefits of IBM Storwize V7000 Real-time Compression feature with VMware vSphere 5.5
Disclaimer: The thoughts expressed above are the collective thoughts of Shashank Shingornikar and Mandar Vaidya. They do not necessarily represent those of their employer.
You Wished !!!
- There shall be no single point of failure, and business continuity with no data loss, in multi-cloud environments
- There shall be scalability with reduced cost in multi-cloud environments
- There shall be a centralized management tool for heterogeneous storage management in multi-cloud environments
- There shall be storage replication between on-premises and cloud storage
- There shall be an automatic and orchestrated way to create and use snapshots, so cloud data can serve DevOps, reporting, and analytics in multi-cloud environments
All your wishes are commandments for us ... you spoke ... we listened ... and now IBM proudly introduces the “IBM FlashSystem 9100 Multi-Cloud Solution for Business Continuity and Data Reuse”.
" Looking to integrate with cloud orchestration and interoperate with on-premises existing storage system !
Thinking of doing Storage based replication from On-premise to Public Cloud ! "
Look no further !!
Think no further !!
Leveraging VMware Site Recovery Manager and IBM Spectrum Virtualize for Public Cloud, IBM's multi-cloud business continuity solution caters to all these needs.
With the shift toward cloud infrastructure, clients are looking to take advantage of the scalability and reduced costs of public cloud for their workloads. Within hybrid cloud environments, clients most often deploy backup and disaster recovery as the main cloud-based use cases.
Business continuity in multi-cloud environments provides advantages such as:
- Reduced Capex/Opex by running disaster recovery in a public cloud, leveraging a pay-as-you-go model
- While production data resides on-premises, the disaster recovery infrastructure can be built to the required RTO/RPO and provisioned in the cloud
- Organizations can optimize their existing heterogeneous storage infrastructure with a centralized storage management tool spanning on-premises and cloud environments
The IBM FlashSystem 9100 Solution for Business Continuity and Data Reuse establishes a relationship between an on-premises IBM FlashSystem 9100 running IBM Spectrum Virtualize and an instance of Spectrum Virtualize for Public Cloud on IBM Cloud infrastructure.
Wonder how it works? The diagram below helps you visualize the architectural overview of the solution.
IBM Spectrum Virtualize for Public Cloud is software deployed on IBM Cloud infrastructure to virtualize cloud block storage for virtualized or physical applications running in the public cloud. With native IP-based storage replication, IBM Spectrum Virtualize for Public Cloud provides storage-based replication services for hybrid cloud solutions, combining on-premises and cloud storage for greater flexibility at lower cost across a range of RPO/RTO targets. It enables data replication and migration between local storage and the IBM Cloud, and between IBM Cloud data centres. Integrated with VMware Site Recovery Manager, this hybrid cloud solution uses a fully tested IBM Storwize Storage Replication Adapter (SRA) to deliver business continuity across a wide range of failures. It further brings flexibility to an organization by enabling migration of virtual infrastructure workloads between the data centre and the cloud.
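Under the covers, the on-premises cluster and the cloud instance are joined with the standard Spectrum Virtualize IP partnership and remote copy commands. A minimal sketch with placeholder IP addresses, volume and cluster names, and bandwidth; options vary by code level:

# on the on-premises FlashSystem 9100: create an IP partnership to the cloud instance
svctask mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 1000
# create a Global Mirror relationship from a local volume to its cloud auxiliary
svctask mkrcrelationship -master prodvol01 -aux drvol01 -cluster CloudSVPC -global -name prodvol01_rel
svctask startrcrelationship prodvol01_rel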
With data in IBM Cloud, new-age use cases such as DevOps, reporting, and analytics mean clients are looking for quick access to data copies and an automated way of managing and maintaining them. IBM Spectrum Copy Data Management provides the data reuse capabilities of the multi-cloud business continuity and data reuse solution. IBM Spectrum CDM works directly on IBM Spectrum Virtualize for Public Cloud volumes through a set of integrated APIs.
IBM Spectrum Copy Data Management can be used for an automatic, orchestrated way of creating and using snapshots, and it simplifies management of data copies by efficiently maintaining versions of the copies residing in storage fronted by IBM Spectrum Virtualize for Public Cloud. It can also enable high-value use cases such as automated disaster recovery across cloud service providers.
Want to know more about the solution?
Visit us at IBM Technical University: Hollywood, Florida, USA, 15-19 Oct 2018, and Rome, Italy, 22-26 Oct 2018.
Additional resources and a detailed blueprint on how to configure the solution: https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=65016865USEN&
Blog Author : Hemanand Gadgil
If you have virtualized your datacentre server resources on a VMware platform, it becomes important for partnering resources such as storage and network to align with VMware virtualization technology. For example, if you want to use storage with your VMware virtualized servers, it is important that the storage supports VMware APIs such as the vStorage APIs for Array Integration (VAAI) and the vSphere APIs for Storage Awareness (VASA) to take advantage of VMware's integration with storage resources. Users who have deployed their VMware virtual infrastructure on external storage arrays need features that enable efficient utilization of storage capacity. VMware introduced the SCSI UNMAP primitive in vSphere 5.0 to address this requirement.
IBM has a long-standing technical partnership with VMware to integrate each other's technology for effective consumption of mutual resources, benefiting end users and customers. Along with other IBM storage offerings such as the IBM Spectrum Virtualize and IBM Spectrum Accelerate storage families, the IBM DS8000 storage family supports VMware integration points such as VAAI and VASA.
What is VMware VAAI SCSI UNMAP?
It's not new; many VMware users know about it already. Still, for those who are unaware of VAAI SCSI UNMAP:
If you use Storage vMotion, vSphere snapshot consolidation/deletion, or virtual machine deletion on a thin-provisioned LUN from an external storage array, you have probably wondered why the space is not released back to the storage array. Prior to vSphere 5.0, space released from vSphere was never returned to the storage array for use by another LUN or another storage host, which was not an effective way of consuming storage in a VMware environment. With the introduction of the VAAI SCSI UNMAP primitive, space released from vSphere on a thinly provisioned LUN is returned to the storage array. This feature is designed to effectively reclaim deleted space to meet continuing storage needs.
VAAI SCSI UNMAP arrived with vSphere 5.0 long ago, so why write about it now? Simple: it is now supported on the IBM DS8880.
IBM DS8880 now supports the VMware SCSI UNMAP primitive. With this, storage space is returned to the DS8880 for other uses when space is released from the vSphere layer using VAAI SCSI UNMAP: the VMware host notifies the storage device about freed space on a thin-provisioned LUN by sending a SCSI UNMAP command, and the DS8880 releases any entire extents allocated within that freed space. On the DS8880, the extent size is either 16 MB for small extents or 1 GB for large extents. Therefore, for any space to be released, the request must cover at least 16 MB, and if the request is not aligned on extent boundaries, only the full extents contained within the requested range are released. For example, freeing 100 MB starting at offset 8 MB releases only the five full 16 MB extents lying between 16 MB and 96 MB, that is 80 MB; the partially covered extents at either end stay allocated.
What are the requirements for using VAAI SCSI UNMAP with the DS8880?
- The VAAI SCSI UNMAP feature is available on the DS8880 with the latest release, 8.2.3 (GA: 9th June, 2017).
- ESXi host version 5.0 or higher.
- Thin-provisioned volumes on the DS8880 with 16 MB (small) extents.
How do I use VAAI SCSI UNMAP with DS8880?
IBM DS8880 storage supports VAAI. Verify this from vCenter by browsing to the VMware datastore created on a DS8880 LUN: a ‘Hardware Acceleration’ status of ‘Supported’ confirms that the storage array supports VAAI.
After verifying that the DS8880 supports VAAI, find out whether it supports the SCSI UNMAP primitive. You need the ESXi server command-line interface for that.
- First, find the NAA ID of the DS8880 LUN backing the VMware datastore.
[root@esx:~] esxcli storage core device list
Display Name: IBM Fibre Channel Disk (naa.6005076308ffc54c0000000000001100)
Has Settable Display Name: true
Device Type: Direct-Access
- Using this NAA ID, display the device-specific VAAI status details. This tells us whether the underlying DS8880 supports the SCSI UNMAP primitive for dead space reclamation.
[root@esx:~] esxcli storage core device vaai status get -d naa.6005076308ffc54c0000000000001100
VAAI Plugin Name:
ATS Status: supported
Clone Status: unsupported
Zero Status: supported
Delete Status: supported
The “Delete Status: supported” line indicates that the host can send SCSI UNMAP commands to the underlying DS8880 storage when space reclamation is requested.
OK!! VAAI SCSI UNMAP is supported on DS8880. Now, how do I release the space back to DS8880?
Assume you are hosting a virtual machine on a datastore created from a thinly provisioned DS8880 LUN.
- Before issuing SCSI UNMAP, check the volume details and used capacity of the DS8880 volume.
dscli> showfbvol 1105
Date/Time: June 2, 2017 11:38:15 AM MST IBM DSCLI Version: 18.104.22.168 DS: IBM.2107-75LR811
datatype FB 512
cap (MiB) 409600
cap (2^30B) 400.0
cap (10^9B) -
cap (blocks) 838860800
reqcap (blocks) 838860800
realcap (MiB) 88944 ⇒Capacity before performing storage vMotion
migratingcap (MiB) 0
- After performing an operation on the host that frees up space (such as a Storage vMotion), run the SCSI UNMAP command from the ESXi host to notify the DS8880 to release the free space that was allocated for this LUN.
[root@esx:~] esxcli storage vmfs unmap -l D2
The "-l" option identifies the volume by the volume label.
For more details on the esxcli storage vmfs unmap command, see https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513
- Now check the space on the DS8880 volume and verify that the storage space has been released; here realcap drops from 88944 MiB to 79536 MiB, reclaiming 9408 MiB.
dscli> showfbvol 1105
Date/Time: June 2, 2017 11:39:00 AM MST IBM DSCLI Version: 22.214.171.124 DS: IBM.2107-75LR811
datatype FB 512
cap (MiB) 409600
cap (2^30B) 400.0
cap (10^9B) -
cap (blocks) 838860800
reqcap (blocks) 838860800
realcap (MiB) 79536 ⇒Capacity after performing VAAI SCSI UNMAP
migratingcap (MiB) 0
In summary, VMware VAAI SCSI UNMAP extends the usefulness of thin provisioning at the array level by maintaining storage efficiency throughout the life cycle of the vSphere environment. With IBM DS8880 storage now supporting the VAAI SCSI UNMAP primitive, the array can reclaim the space released by vSphere, helping to maintain storage efficiency for VMware deployments.
For more information, refer to the IBM DS8880 product page: https://www.ibm.com/systems/storage/hybrid-storage/ds8000
Dockerizing Oracle Database
Docker is the next buzzword on the net. While a lot of work has been done on dockerizing various applications, software like Oracle Database still poses challenges during installation, configuration, and execution. This blog entry gives users a flavour of integrating Oracle Database to run in a Dockerized environment.
Are you sure this can be done?
Yes. Not only can it be done; once it is up, you'll hardly notice any difference compared to an instance and database on bare metal or a VM.
Should this be done?
For production Oracle Database instances, we suggest NOT. Oracle will not support this officially.
So, what are the ideal use cases?
Use cases include creating an environment for testing a PSU or CPU patch from Oracle, providing an environment for a developer with specific requirements, or building a training environment. Even experienced DBAs will find it handy as a sandbox for personal testing.
What does the flow look like?
In the POC environment we made this a multi-step process for better understanding and granular control. Here is the basic flow.
The Platform Image (PI) is the base image on which customizations are made using a Dockerfile. Because the PI is read-only, Docker creates an intermediate container that holds only the changes made to the PI. When these intermediate containers are saved, they persist their state as new read-only images. The process continues until an ORACLE_HOME image containing the Oracle binaries is created.
Database instances are spawned from this image. To keep the database on persistent storage, a data-only container can be created; the data container, together with a container based on the ORACLE_HOME image, is used to create the database instance and the database underneath.
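A minimal sketch of that flow with the Docker CLI; the image names, tags, and paths are hypothetical, and the Dockerfile installing the Oracle binaries is assumed to exist already:

# build the ORACLE_HOME image from a Dockerfile containing the Oracle binaries
docker build -t oracle/db12c:home .
# create a data-only container that exposes a volume for the datafiles
docker create -v /u02/oradata --name oradata oraclelinux:7 /bin/true
# spawn a database container that persists its datafiles in the data container
docker run -d --name ora12c -p 1521:1521 --volumes-from oradata oracle/db12c:home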
Here is how the final picture looks. In this environment, two 12c Dockerized database instances are running; the CDB database looks as shown in the picture above. To create, clone, or plug in PDBs, an NFS mount point from a Storwize V7000 system is mapped on the Docker host.
So what are the advantages of this?
There are several. Although there are multiple images, each image is read-only, which gives a consistent starting point every time a container is spawned from it. Each container has its own namespace isolation consisting of PIDs, network, and control groups (cgroups), so each running container behaves like an independent host despite being based on the same image. Images can be saved or pushed to an on-premises repository, making them readily available to other Docker hosts, and they can be moved across platforms: servers across a network, a laptop, or even public cloud space such as Amazon Web Services.
With the surge of smartphones and tablets has come a surge of applications for storing and sharing data. Employees of many organizations tend to use whichever applications they find most comfortable to store and share data. This puts organizations' data security at risk and pressures them to provide an official application that allows users to store and share data securely while the organization retains control of its data.
To gain control over their data, organizations need robust, reliable, easy-to-integrate storage for their file sharing applications. Along with security and control, organizations are looking for additional services and features that can enhance employee productivity.
Citrix offers ShareFile—an enterprise-class, IT-managed, secure file sync and sharing service. ShareFile offers IT the ability to control sensitive corporate data while meeting the mobility and collaboration needs of users and the data security requirements of the enterprise.
Citrix provides multiple options to store the data be it on premise, in the cloud or a combination of both to meet the needs for data sovereignty, compliance, performance and costs. For organizations that require increased data protection, ShareFile offers customers the ability to encrypt data with their own encryption keys.
IBM has more than one option for the on-premises storage layer. One answer for a highly scalable file sync and share solution is IBM Spectrum Scale. The other option is IBM Storwize V7000 Unified, which provides a unique combination of file and block storage for small and medium file sync and share deployments.
ShareFile extends an organization's data strategy to include existing network file drives, SharePoint, and OneDrive for Business, allowing a single point of access to all data sources. StorageZone Connectors make it easy to securely access documents that otherwise cannot be reached outside corporate networks or on mobile devices. Any enterprise content management (ECM) system can be accessed with the StorageZone Connectors SDK, expanding the types of data users can access and edit on the go via ShareFile.
Advanced security features, including remote wipe, device lock, passcode protection, whitelisting/blacklisting, and data expiration policies, let you determine exactly how sensitive data is stored, accessed, and shared. Track and log activity in real time and create custom reports to meet compliance requirements.
While IBM Spectrum Scale brings scalability and performance, it also adds value through the features below:
- File encryption and secure erase
- Transparent flash cache
- Network performance monitoring
- Active File Management (AFM) parallel data transfers
- NFS version 4 support and data migration
- Backup and restore improvements
- File Placement Optimizer (FPO) enhancements.
Other features of IBM Storwize V7000 Unified:
- IBM Storage Mobile Dashboard
- Dynamic Migration
- IBM Easy Tier
- Thin provisioning
- Flash drives
- Active File Management (AFM) parallel data transfers
- IBM HyperSwap
- IBM Real-time Compression
- Encryption for virtualized storage
Below is a high-level flow diagram of the solution using IBM Spectrum Scale.
For the high level overview of Citrix ShareFile and IBM Storage Systems, follow the link below:
For more information of a solution with IBM Spectrum Scale, follow the link below:
For more information of a solution with IBM Storwize V7000 Unified, follow the link below:
Disclaimer: Above are my personal thoughts and not necessarily of my employer.
For information technology (IT) customers looking to control site expansion costs, Microsoft offers its Azure cloud services. To appeal to larger customers with existing disaster recovery (DR) models that span multiple sites and use SAN solutions, Microsoft recently added Azure Site Recovery (ASR) to its cloud services mix. This allows Microsoft to target the full spectrum of potential customers for its cloud services. For small customers that cannot afford the costs (both CAPEX and OPEX) associated with additional sites, traditional Azure services meet their DR requirements. However, to further increase Azure business revenue, Microsoft realized it needed to attract more large businesses with existing SAN infrastructures by appealing to cost-conscious CIOs facing common IT budget constraints. In essence, Microsoft cloud services continue to appeal to smaller customers who cannot add data centers or sites and to larger customers who wish to control site sprawl. Why bother with the cost and management headaches of maintaining additional sites for disaster recovery, to meet customer service level agreements by protecting business-critical data and services, when you can let Microsoft protect them for you and save your money and sanity for other high-priority business needs? That is where Microsoft Azure and ASR services come into play.
Heralded as Microsoft’s cloud computing platform, Azure provides a simple, reliable, and extensible web-based interface or front end that is tightly integrated with a Microsoft System Center VMM and SQL Server back end. While the overall Azure model is multi-tiered, think of it, more or less, as a web management portal that uses Internet Information Services (IIS) at its foundation with VMM as the engine that drives its cloud tasks. VMM in turn, stores all of the cloud configuration and environment data in a SQL Server database. Of course, the Azure cloud itself is based on the Microsoft System Center application suite and consists of a Microsoft global network of secure data centers that offer compute, storage, network, and application resources to help protect your data and offset the high availability and administrative costs of building and managing additional sites. Even though Azure has multiple tiers, the storage array aspects of Azure Site Recovery using SAN replication for on-premises clouds and how replication differs from traditional Hyper-V replica implementations is the primary focus of this blog.
The Hyper-V replica feature is designed to protect VMs hosted by different servers using a built-in replication mechanism at the VM level. A primary site VM can asynchronously replicate to a designated replica site using an Ethernet network infrastructure including local area networks (LAN) or wide area networks (WAN). The designated replica remains offline in a stand-by state pending planned or unplanned VM failovers. After the initial VM copy is replicated to the secondary site, asynchronous replication occurs for only the primary VM changes. This network-based replication does not require shared storage or specific storage hardware and Hyper-V replicas can be established between stand-alone or highly available (HA) VMs, or a combination of both. The Hyper-V servers can be geographically dispersed and the VMs are not even required to belong to a domain. Thus, the Hyper-V replica requirements are rather basic and easy to implement yet are restricted to asynchronous network replication only.
Until recently, Azure could only leverage Hyper-V replicas using a network replication channel, but it can now use SAN replication between two on-premises VMM sites or clouds. With the addition of a Hyper-V replica SAN replication channel, synchronous replication can be used to eliminate asynchronous lag times, and multiple-VM consistency is possible using Azure Site Recovery. It is important to realize that asynchronous SAN replication behavior is similar to asynchronous Hyper-V network replication: after the initial VM copy is replicated to the secondary site, asynchronous SAN replication occurs for only the primary VM changes. In addition, if IBM Real-time Compression is used, performance gains are realized because less data needs to replicate over the SAN. Whatever the storage options, such as compression, with just a few clicks in the Azure Site Recovery management portal, simple orchestration of IBM XIV replication and disaster recovery for Microsoft Hyper-V environments can be automated in the form of planned and unplanned site failovers. In a practical sense, this collection of Azure SAN replication enhancements and disaster recovery functionality is an extension of past Microsoft System Center VMM storage automation features.
So with the introduction of Microsoft ASR cloud services, larger customers have the option to provide DR for their private clouds using IBM XIV SAN replication but they also can take advantage of hybrid cloud protection by subscribing to Microsoft Azure services. This services model gives customers the opportunity to protect their existing data center and SAN infrastructure investments while enticing them to purchase additional Microsoft Azure cloud services. Refer to Figure 1 below for a general Microsoft cloud layout that uses ASR with IBM XIV SAN replication:
Figure 1: Microsoft Azure Site Recovery using IBM XIV SAN replication general lab configuration
For further information about Microsoft ASR using IBM XIV Storage System Gen3, including step-by-step configuration processes, please refer to the following white paper:
After successfully implementing the Real-time Compression feature in the Storwize V7000, IBM has taken a step further, bringing this patented technology to the IBM XIV storage system. With the recent announcement of the XIV 11.6.0 release, the Real-time Compression feature is seamlessly integrated into the XIV storage system. Eliminating the need for any extra hardware, the IBM Random Access Compression Engine (RACE) technology is now integrated into the XIV storage system software stack to compress data before writing it to disk (above the cache mechanism), resulting in up to 80% storage capacity savings.
It is designed with transparency in mind, so it can be implemented without changes to applications, hosts, networks, fabrics, or external storage systems. The solution is not visible to hosts, so users and applications continue to work as is. To estimate the compression savings on existing non-compressed XIV volumes, the Comprestimator utility is now integrated into the XIV software.
What does Compression have in store for me?
On the XIV system, the compression ratio for all uncompressed volumes is continuously estimated, even before compression is enabled. The figure shows the various stages of volumes on the system, ranging from uncompressed volumes and their potential savings to the total amount of compression achieved on a volume.
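Comprestimator itself is built into the XIV software, but the idea behind such an estimate is easy to illustrate: sample random blocks from a data set, compress them, and project the savings. The sketch below is a toy model of that approach, assuming a file or device path you supply; it is not the actual Comprestimator algorithm.

```python
# Illustrative only: a toy estimator in the spirit of Comprestimator.
# It samples random blocks from a given path and compresses them to
# project savings; the real utility lives inside the XIV software and
# uses its own sampling and accuracy model.
import os
import random
import zlib

def estimate_savings(path: str, block_size: int = 64 * 1024,
                     samples: int = 256) -> float:
    """Return the estimated fraction of capacity compression would save."""
    raw_total = compressed_total = 0
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)          # works for files and block devices
        size = f.tell()
        for _ in range(samples):
            f.seek(random.randrange(max(1, size - block_size)))
            block = f.read(block_size)
            if not block:
                continue
            raw_total += len(block)
            compressed_total += len(zlib.compress(block, 1))
    return 1.0 - compressed_total / raw_total

# e.g. estimate_savings("archive01.mkv") -> ~0.05 (already compressed),
#      estimate_savings("db_datafile.dbf") -> ~0.60 (good candidate)
```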
What are the Compression benefits for XIV?
With the inline implementation of Real-time Compression, the IBM XIV now delivers dramatic cost savings without the need for extra hardware and provides the following benefits:
- Increases usable capacity per rack, typically to one petabyte or more, greatly reducing effective cost per capacity
- Replicates compressed data faster and with less bandwidth, freeing up bandwidth for other uses
- Continuously displays predicted or actual compression ratios for all volumes
- Converts uncompressed volumes to compressed volumes non-disruptively
So how does it work?
The Real-time Compression implementation in XIV uses an above-cache architecture, where data is compressed and decompressed between the I/O interface and the cache. The compression node runs on every XIV module, taking advantage of the system's parallel architecture: each module compresses only the portion of a volume that belongs to it, distributing the compression workload across all modules. Hence, Real-time Compression has minimal impact on the performance delivered by the XIV.
When writes arrive, data is compressed before it enters the cache, and the acknowledgment is then sent back to the host. Data is stored compressed in the cache, so on reads it is decompressed by RACE as it leaves the cache on its way to the host. During XIV mirroring, data is compressed only once and sent across the network in compressed form, reducing the network bandwidth required.
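A toy model may help fix the above-cache idea: compression happens once on the way in, the cache and the mirror link both see compressed data, and decompression happens only on the way back to the host. The class below is purely conceptual; the real RACE engine uses far more sophisticated chunking and locality handling.

```python
# Purely conceptual: where compression sits in an above-cache design.
# Data is compressed once on the way in; the cache and the mirror link
# both see compressed blocks; decompression happens only on the way
# back to the host.
import zlib

class AboveCacheVolume:
    def __init__(self) -> None:
        self.cache = {}  # block address -> compressed bytes

    def write(self, addr: int, data: bytes) -> None:
        # Compress before the data enters the cache; the host is
        # acknowledged once the compressed block is staged.
        self.cache[addr] = zlib.compress(data)

    def read(self, addr: int) -> bytes:
        # The cache holds compressed data; decompress only toward the host.
        return zlib.decompress(self.cache[addr])

    def mirror_payload(self, addr: int) -> bytes:
        # Mirroring ships the already-compressed block, so the link
        # carries less data and no second compression pass is needed.
        return self.cache[addr]

vol = AboveCacheVolume()
record = b"customer row 00042 " * 512
vol.write(0, record)
assert vol.read(0) == record
print(len(vol.mirror_payload(0)), "bytes cross the mirror link,",
      len(record), "bytes reach the host")
```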
What will benefit most from Compression?
- Database environments – DB2, Oracle, MS-SQL, and so on
- Database Applications – SAP, Oracle applications, and so on
- Server/Desktop Virtualization – KVM, VMware, Hyper-V, and so on
- Other compressible workloads – seismic, engineering, and so on
- Email – Microsoft Exchange, and so on
Are there any guidelines for Compression?
IBM Real-time Compression is appropriate for data that has the following characteristics:
- Any data for which the Comprestimator tool estimates 25% or higher savings
- Volumes that contain data that is not already compressed (for example, uncompressed image and video files)
- Data for which application-based encryption is not used, or data that is not sent encrypted to the XIV
Anything I can refer to?
Real-time Compression not only works well with randomly accessed data such as IBM DB2, Oracle, and MS-SQL Server databases, but also provides good results with server virtualization solutions like VMware, KVM, and Hyper-V. With Oracle databases, compressed volumes take advantage of the above-cache architecture, compressing writes seamlessly: a 57% compression saving was observed while creating a terabyte of data, with minimal performance penalty. (Publication: WP102551)
VMware vSphere virtual machines can be seamlessly deployed on compressed volumes, often with compression savings of 50% to 75%, allowing customers to reduce the storage capacity required for virtualized environments. (Publication: WP102552)
Microsoft Hyper-V virtualization helps customers maximize System x server and other resource use. Included in Windows Server, Microsoft Hyper-V helps reduce costs by allowing a greater number of application workloads to be hosted on fewer physical servers. Microsoft SQL Server 2012 SP1 OLTP data files and Windows Server 2012 R2 VM system files, stored on a Hyper-V virtual disk backed by an XIV compressed volume, achieved 73% compression savings. (Publication: WP102553)
What about performance?
While one team tested the compression benefits and compiled the papers above, another team from the IBM Tel Aviv lab was busy performance-testing an Oracle database hosted on IBM XIV compressed volumes.
In the test setup, the team configured both compressed and uncompressed volumes on the XIV for better parallelism. These volumes were mapped to the ESX system hosting the database server and used to create multiple VMFS file systems. A 5 TB database was created on the VMFS volumes using the Benchmark Factory tool. During the 12-hour test run, the load was ramped from 1,000 to a maximum of 30,000 users to put the system under a realistic production load. The IOPS and response-time information reported by Benchmark Factory is shown in the figure below; each point on the graph marks the addition of 2,500 users. The graph clearly indicates that response time is minimally affected when using the compressed volumes.
Blog Authors: Mandar Vaidya, Shashank Shingornikar
IBM Storage at your service courtesy of IBM Spectrum Control and VMware vRealize Automation
In today's emerging, or, I would say, stabilizing, world of the IT cloud, everything needs to be delivered "as a service," and demand keeps growing for every IT solution to be available that way. Some organizations are thinking creatively to offer new IT solutions as services, while others are developing cloud platforms that help those organizations deploy their cloud solutions quickly. This is where the race has begun.
There are end users and business opportunists who do not want to spend time designing and implementing a solution before selling it; they want to start their business quickly, and they are ready to rely on the organizations that provide such platforms to bring their solution to market in a short time. Depending on the need, these platforms help build a public cloud, a private cloud, or both, that is, a hybrid cloud.
Various vendors in the market provide the whole platform themselves and also integrate with other vendors to build a unique platform that helps build the cloud. In today's world, any product being developed has to take into account how it can be integrated into the cloud or how it can enable a cloud platform.
Cloud enablers exist at every layer of a cloud solution, from the topmost to the bottommost. IBM Spectrum Control is one of those enablers, providing efficient infrastructure management for cloud, virtualized, and software-defined storage. It simplifies and automates storage provisioning, capacity management, availability monitoring, and reporting.
IBM Spectrum Control will also be an important factor in the success of the IBM Spectrum Storage offerings, providing a control plane capable of provisioning and monitoring storage in the cloud or on premises, with its behavior defined in software. The software-defined character of IBM Spectrum Control lets it receive storage definitions from the layer above through interfaces made available to cloud providers and enablers. These interfaces take the form of plug-ins developed for specific cloud vendors; for example, IBM Spectrum Control Base Edition provides plug-ins for VMware vRealize Orchestrator, VMware VASA, VMware vRealize Operations, VMware vRealize Automation, and more.
With the help of IBM Spectrum Control Base Edition and the VMware vRealize suite, a cloud architect can design various cloud solutions and deliver them in an "as a service" fashion. One very useful solution for cloud environments is "storage as a service" using IBM storage. Here, an architect designs a service using VMware vRealize Automation, VMware vRealize Orchestrator, IBM Spectrum Control Base Edition, and the IBM XIV storage system, making storage available as a service so that an entitled end user can request storage space for their VMs.
VMware vRealize Automation, with its Advanced Services, can deliver almost anything as a service (XaaS). Advanced Services allows a cloud architect or administrator to advertise vRealize Orchestrator workflows as services, so whatever workflows are designed in vRealize Orchestrator can be exposed through vRealize Automation. The IBM Storage plug-in for vRealize Orchestrator, a component of IBM Spectrum Control, allows vRealize Automation to define and provision storage as the administrator or user requires, as sketched below.
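Under the covers, a vRealize Orchestrator workflow exposed this way ends up calling IBM Spectrum Control Base Edition, which in turn drives the XIV. The sketch below imagines that call as a simple REST exchange; the base URL, endpoint paths, payload fields, and credentials are all illustrative assumptions, not the documented SCB API.

```python
# Hypothetical sketch of what an exposed vRealize Orchestrator workflow
# ultimately asks IBM Spectrum Control Base Edition to do. The base URL,
# endpoint paths, payload fields, and credentials below are illustrative
# assumptions, not the documented SCB API.
import requests

SCB = "https://scb.example.com:8440/api/v1"   # assumed address
session = requests.Session()
session.verify = False                         # lab setup, self-signed cert

# Authenticate as the service account vRealize Orchestrator would use
# (hypothetical endpoint and payload).
session.post(f"{SCB}/users/get-auth-token",
             json={"username": "vro_svc", "password": "********"})

# Ask for a 100 GiB volume from a storage service backed by XIV
# (hypothetical endpoint and payload).
resp = session.post(f"{SCB}/volumes",
                    json={"name": "vra-datastore-01",
                          "size_gb": 100,
                          "service": "gold-xiv"})
resp.raise_for_status()
print("Provisioned:", resp.json())
```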
For more details on how "storage as a service" can be implemented using IBM Storage, IBM Spectrum Control, and VMware vRealize, refer to the technical paper.
Also refer to the recorded demos below:
Demo: IBM Spectrum Control & VMware vRealize Automation - Configuration
The video demonstrates the configuration flow for integrating IBM Spectrum Control Base Edition, IBM XIV, VMware vRealize Orchestrator, and VMware vRealize Automation to enable a 'Storage as a service' solution. It also demonstrates creating a volume, mapping the volume, and creating a datastore from the vRealize Automation web console.
Demo: IBM Spectrum Control & VMware vRealize Automation - Datastore Creation
This short video demonstrates datastore creation upon a user request from VMware vRealize Automation, with IBM Spectrum Control Base Edition playing its part to seamlessly create the backing volume on the storage.
For more information: https://www.ibm.com/systems/storage
Disclaimer: Above are my personal thoughts and not necessarily of my employer.
The healthcare industry is striving to become a top-class service industry like any other. Because it deals with people's lives, it is all the more important and critical.
As per current trends, hospitals are merging to provide world-class healthcare service at an affordable price to their patients. The healthcare industry is evolving day by day, accompanied by technology innovations that give patients the fastest and most efficient service possible. Enterprise software, mobile, networking, processors, storage, and more are all racing to contribute to this innovation and upliftment of the healthcare industry.
Here, Epic is the front runner in understanding most of the needs of the healthcare industry and has developed a sophisticated suite of software products that serve patients better: a large portfolio for managing patient, clinical, laboratory, billing, and other workflows. Epic is well known for its EHR software and has been the industry leader for a long time. Being at the top is not easy; Epic imposes very stringent software design requirements that the ecosystem around it must meet. Epic uses the InterSystems Caché database as its operational database, which demands very fast response times from the storage: however many I/Os are issued, they must complete within the time Epic stipulates.
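Because the requirement is about every I/O finishing on time, tail latency matters more than the average. The sketch below is one illustrative way to reason about such a budget, using synthetic latency samples and a placeholder threshold rather than Epic's actual figures.

```python
# Illustrative only: judging storage against a per-I/O completion budget
# by looking at tail percentiles rather than the mean. The latencies are
# synthetic and the 1.0 ms budget is a placeholder, not Epic's figure.
import random
import statistics

def percentile(samples: list[float], p: float) -> float:
    s = sorted(samples)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

# Pretend these are measured read latencies in milliseconds.
latencies_ms = [random.lognormvariate(-1.0, 0.5) for _ in range(10_000)]

budget_ms = 1.0  # placeholder per-I/O budget
print(f"mean={statistics.mean(latencies_ms):.3f} ms  "
      f"p99={percentile(latencies_ms, 99):.3f} ms  "
      f"p99.9={percentile(latencies_ms, 99.9):.3f} ms")
if percentile(latencies_ms, 99.9) > budget_ms:
    print("tail latency exceeds the stipulated budget")
```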
In the past few years, EHR solutions have moved from SAS drives to flash-based SSDs, yet still over the SCSI protocol. Now the storage industry is making a technology shift for the good: it has slowly but surely started adopting the new industry standard, NVMe. NVMe has already begun to make noise and will eventually replace SCSI for flash-based solutions. Most of the storage industry has started providing solutions around it, and IBM too has announced the IBM FlashSystem 9100, built on its own patented FlashCore Modules with NVMe interfaces and also supporting industry-standard NVMe drives from other vendors.
IBM Spectrum Virtualize software (popularly known as IBM SAN Volume Controller) is the heart of the IBM FlashSystem 9100, capitalizing on all the rich, up-to-date hardware technology the FlashSystem 9100 offers. IBM Spectrum Virtualize brings along advanced, well-known, and proven functions such as Easy Tier, Remote Mirroring, and FlashCopy, making the IBM FlashSystem 9100 an even more sophisticated and advanced storage platform.
So how does this make a difference to the Epic EHR solution? It does a lot. First, the NVMe interface boosts performance. Second, the IBM FlashCore Modules save a lot of storage capacity through compression that has no impact on application performance. Below is a screenshot of the IBM FlashSystem 9100 dashboard showing just a glimpse of the compression and savings when 60 TB of Epic's operational database was created and fully allocated FlashCopy mappings were made.
For more information on the Epic EHR solution with IBM FlashSystem 9100, refer to this link.
Disclaimer: Above are my personal thoughts and not necessarily of my employer.