Medical imaging is one of the technologies reshaping how the medical field is viewed. CT and MRI, for example, are among the most significant innovations feeding medical imaging today. Advances like these let medical practitioners and researchers unravel various diseases and enable precise treatments, and with them the influence of medical imaging on the healthcare industry keeps growing. Its use is expanding beyond diagnostics into prevention, research and therapy.
With that has come a surge in the modalities that capture medical images, typically stored as DICOM files. With so many options for producing medical images, a great deal of imaging data is being generated that must be retained for subsequent treatment, consultation or research. Some of it is needed immediately; some may only be needed later if a disease recurs.
A few vendors in the market provide archiving of medical images, and GE is one of the prominent ones. With GE Healthcare Centricity Enterprise Archive, medical image files can be saved or archived on IBM Spectrum Scale running on IBM Elastic Storage Server. IBM Spectrum Scale can be configured with IBM Transparent Cloud Tiering, which moves older or less frequently used files from IBM Spectrum Scale to IBM Cloud Object Storage or any other cloud endpoint supported by Transparent Cloud Tiering. The same medical images or files can be seamlessly restored from the cloud using this solution.
The diagram below shows the architecture of the whole solution.
In this solution, GE Centricity Clinical Archive stores the medical imaging data on IBM Spectrum Scale running on IBM Elastic Storage Server, and IBM Transparent Cloud Tiering, running within IBM Spectrum Scale, then transfers that data as objects to IBM Cloud Object Storage. Policies can be written in IBM Spectrum Scale to define which medical images are moved to, and recalled from, the cloud; an illustrative policy fragment is shown below.
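File movement in IBM Spectrum Scale is driven by ILM policy rules that are applied with the mmapplypolicy command. The following is only a rough sketch, not the policy used in the published solution: it assumes a cloud external pool managed by Transparent Cloud Tiering, and the EXEC script path, pool names and file-selection criteria are placeholders to be replaced from the Transparent Cloud Tiering documentation for your release.

/* Hypothetical policy: tier DICOM studies untouched for 90 days to cloud object storage */
RULE EXTERNAL POOL 'cloudpool' EXEC '/opt/ibm/MCStore/bin/mcstore' OPTS '-F'  /* illustrative path */
RULE 'MoveColdStudies' MIGRATE FROM POOL 'system'
     TO POOL 'cloudpool'
     WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
       AND LOWER(NAME) LIKE '%.dcm'

The policy file would then be applied (or scheduled) with something like: mmapplypolicy gpfs01 -P /var/mmfs/etc/cloudtier.pol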
For complete information on the solution, refer to this paper.
Disclaimer: Above are my personal thoughts and not necessarily of my employer.
Most healthcare systems are challenged to deliver the highest quality of service to the broadest patient population at the lowest cost. These challenges are compounded by a significant increase in the number and variety of clinical tests, diagnostics, and investigative procedures. Chronic disease in aging populations, unsustainable demand for healthcare services, and significant increases in the cost of those services are driving huge data growth in patient studies. That aggressive growth means most healthcare institutions need new ways of thinking about, analyzing and processing data. As a rough estimate, a healthcare institution that serves over 4 million patients annually today can expect that number to double in roughly three years at a conservative growth rate of 25% year over year.
IBM Spectrum Scale offers a very compelling way to bring efficiency to clinical and software services. It manages patient information at virtually limitless scale and mines that data for new insights to improve patient outcomes. IBM Spectrum Scale is a software-defined solution for managing structured and unstructured data with security, reliability, and high performance. It provides system scalability and very high availability and reliability, with no single point of failure, for large-scale data stores. IBM Spectrum Scale also delivers a roadmap for taking a client (hospital) from isolated silos of departmental PACS solutions to a common data lake/repository. Leveraging built-in policies, a user can move data from a short-term archive to object-based data stores on IBM Cloud Object Storage. Built with Spectrum Scale, Transparent Cloud Tiering, and IBM Cloud Object Storage, this solution provides a secure foundation to access any data, residing anywhere, from anywhere; it facilitates closer collaboration between clinical practice and clinical research through simultaneous reuse of the same data; and its built-in software intelligence simplifies data management by moving data to the optimum storage tier based on performance and cost criteria.
The published whitepaper provides recommendations and best practices to help ensure an efficient installation of this solution, using policies to move medical imaging data from Spectrum Scale to IBM Cloud Object Storage with acceptable performance.
If you are at the IBM InterConnect conference today, please stop by this afternoon to learn more about how we built a proven and well-tested solution for medical information (IBM Storage with Cloud Object Storage: A Resilient Solution for Medical Information; Monday, 1:00 PM - 1:45 PM | Lagoon L | Session ID: 6987A). See you there!
For those who missed the session, the presentation is available on SlideShare: https://www.slideshare.net/IBMSystemsISVs/resilient-solution-for-healthcare-data-ibm-spectrum-scale-and-ibm-cloud-object-storage
As the world moves to the new standard of object storage, many applications still live in a legacy world and cannot use object APIs. In the past, the only solution was to attach the server to a gateway that performed the conversion, often along with separate tiering software to provide access to the content once it was stored. IBM has addressed this for Windows clients by partnering with one of its ISV partners, Tiger Technology.
Their software, which runs on most Windows servers, converts files to object format and performs policy-based tiering so that tiny files or metadata files that are not well suited to the cloud remain local. This provides strong archive capability and performance for applications not designed with cloud storage in mind. Later in April, coinciding with the NAB show, the Tiger Bridge software will also add tiering to LTFS for a very inexpensive deep-archive tier. Look for more information at both ISC West early next month and NAB at the end of April.
Quick access to copies of your data is challenging in a traditional environment. New use cases such as cloud, DevOps, analytics and reporting rely on quick access to data copies. Without automation in place for fast and reliable access to those copies, the operational efficiency of your business can suffer severely.
IBM Spectrum Copy Data Management, used in conjunction with IBM Spectrum Storage, enables these critical use cases by providing in-place copy data management that modernizes processes within the existing infrastructure.
IBM Spectrum Copy Data Management can be deployed inside a virtual machine in about 15 minutes and then catalogs the existing environment: IBM Spectrum Storage systems, the VMware environment, and applications such as Oracle or Microsoft SQL Server.
The following diagram shows the orchestrated copy data management of applications hosted on a VMware virtualized environment and IBM Spectrum Storage.
It leverages the native Global Mirror and FlashCopy functions of IBM Spectrum Virtualize to create snapshots, clones and replicas.
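Under the covers, these are the same point-in-time copy and replication functions an administrator could drive by hand from the Spectrum Virtualize CLI; IBM Spectrum Copy Data Management orchestrates and catalogs them. A rough, illustrative sequence (volume, map and system names are made up) might look like:

# Point-in-time copy (FlashCopy) of a production volume for reuse as a dev/test copy
svctask mkfcmap -source oradata_vol -target oradata_devcopy -name fcmap_oradata -copyrate 50
svctask startfcmap -prep fcmap_oradata

# Asynchronous replication (Global Mirror) of the same volume to a remote system
svctask mkrcrelationship -master oradata_vol -aux oradata_vol_dr -cluster remote_svc -global -name gm_oradata
svctask startrcrelationship gm_oradata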
IBM Spectrum Copy Data Management offers the following benefits with IBM Spectrum Virtualize:
1) Automatic creation and use of snapshots and replicas on existing IBM Spectrum Virtualize systems, ensuring application consistency.
2) Simplified management of data copies by efficiently maintaining the versions of copies residing on IBM Spectrum Virtualize systems.
3) Support for high-value use cases such as automated disaster recovery across cloud service providers.
4) By integrating application-centric VMware environments and Spectrum Virtualize systems, it caters to the modern use case of consuming data copies in a DevOps environment.
Workflow for IBM Spectrum Copy Data Management
To view the configuration steps of the demo and learn more about the integration of IBM Spectrum Copy Data Management with IBM Spectrum Storage and VMware, visit:
YouTube link : https://youtu.be/OwfUjclfKVc
Technical White paper : https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=TSW03547USEN
Software-defined storage (SDS) is a key component for clients adapting to the modern data center and enabling hybrid clouds. By decoupling storage hardware from the software that manages it, SDS lets clients keep their existing heterogeneous storage hardware while simplifying management by virtualizing that underlying hardware. Clients also gain data replication and seamless migration between heterogeneous storage platforms.
Disaster Recovery as a Service using IBM Spectrum Virtualize
The solution is built on IBM Spectrum Virtualize software running on Intel x86 processor-based servers at the recovery site and an IBM Storwize V7000 at the protected (production) site. It uses VMware Site Recovery Manager (SRM) to orchestrate replication between the IBM Storwize V7000 and IBM Spectrum Virtualize. The diagram below shows an architectural overview of the environment.
Where can it be deployed?
- Cloud and managed service providers looking to offer DRaaS to users with heterogeneous or dissimilar storage infrastructures
- Clients looking to reduce capital expenditure (CAPEX) and operational cost for DR by using a software-defined storage approach
- Clients looking to integrate with cloud orchestration and interoperate with existing on-premises storage systems
- Organizations looking to optimize their existing heterogeneous storage infrastructure with a centralized storage management tool
Look for more resources here.
Disaster recovery as a service using IBM Spectrum Virtualize and VMware Site Recovery Manager integration
YouTube url: https://youtu.be/Gc3oaBkQbR4
As the use of camera security surveillance systems grows in response to increased security concerns, so does the demand for better image quality. Demand for longer video retention is also growing, driven by legal compliance. Higher image quality and longer retention together mean more data to store and archive at the storage level.
IBM Storage options and solutions:
With more data generated from higher-resolution cameras and longer retention periods, the storage system plays an important role in the video surveillance architecture; it is as critical as the cameras themselves. Different types of storage systems can be selected within the same architecture based on the life-cycle requirements of the video data.
Some key points to consider before selecting the storage system:
- The majority of video surveillance data is never reviewed. Video surveillance systems are write intensive, and the storage must be geared to handle continuous writes.
- Seamless data movement across storage tiers such as SAS, NL-SAS, tape and object storage throughout the life cycle of the video content.
- How easily does the solution scale to meet future requirements?
To address the ever-increasing and challenging storage requirements of video surveillance systems, IBM offers validated storage solutions with industry-leading software from Genetec and Milestone Systems. The IBM team has performed extensive testing and provides a range of options, from medium to very large enterprise deployments, based on your requirements.
IBM Storwize platform:
IBM Storwize systems provide an easy-to-use solution for medium to enterprise workloads. The platform has been tested with Genetec and Milestone systems and offers a very cost-effective solution for medium to large customers.
The IBM team has tested the Genetec Security Center and Milestone XProtect Corporate suites with IBM Storwize in the IBM lab and published validated architectures for the surveillance system.
Figure 1: DVS high level architecture with the IBM Storwize system.
The IBM Storwize system was connected to multiple video management and surveillance servers over 16 Gbps Fibre Channel. A single Storwize system can accommodate up to 1,200 cameras, depending on camera resolution and the retention period of the content.
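As a rough, purely illustrative sizing exercise (actual figures depend on codec, resolution, frame rate and motion): if each of 1,200 cameras streams at an assumed 4 Mbps, the array must sustain about 4.8 Gbps, or roughly 600 MB/s, of continuous writes, and retaining that footage for 30 days requires on the order of 4.8 Gbps x 86,400 s x 30 days / 8 ≈ 1.5 PB of usable capacity before RAID or compression is taken into account.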
For detailed information about the Genetec reference architecture, refer to the solution document: https://public.dhe.ibm.com/common/ssi/ecm/ts/en/tsw03523usen/TSW03523USEN.PDF
For detailed information about the Milestone reference architecture, refer to the solution document.
Spectrum Scale and Elastic Storage Server:
Elastic Storage Server, powered by Spectrum Scale, provides the highly scalable solution required by very large surveillance customers such as airports and metro-city surveillance.
Spectrum Scale and Elastic Storage Server based solutions were tested with Genetec Security Center and Milestone XProtect Corporate. The Spectrum Scale based solution provides a single namespace with hierarchical storage options, storing content on different storage tiers according to the data life cycle of the video content and moving it transparently between tiers without impacting the video surveillance system.
The Spectrum Scale based solution has also been integrated with Spectrum Archive Enterprise Edition to move data seamlessly to tape and reduce the overall total cost of the solution.
Figure 2: DVS high-level architecture with Spectrum Scale and ESS
For detailed information about Milestone XProtect Corporate with Spectrum Scale and Elastic Storage Server, refer to the solution document below.
For detailed information about Genetec Security Center with Elastic Storage Server, refer to the solution documents below.
With the introduction of IBM Spectrum Virtualize software 7.7, the IBM Storwize product family now supports on-premises Microsoft Azure Site Recovery (ASR) using a Hyper-V replica Storage Area Network (SAN) replication channel. Rather than repeat content from my previous blog about IBM XIV and Microsoft ASR at https://www.ibm.com/developerworks/community/blogs/bb3d5479-8e6c-45dc-9cc3-d46716d3a749/entry/Failover_Microsoft_cloud_site_within_minutes_using_ASR_with_IBM_XIV?lang=en, I thought I would share a few of the key differences I observed between IBM XIV and IBM Storwize when testing this Microsoft solution. Think of it as a support blog that reveals a few workarounds to help expedite your Microsoft ASR cloud disaster recovery solution.
There are 3 primary differences that I noticed when testing this solution:
- In the Microsoft System Center 2012 R2 Virtual Machine Manager (VMM) console, when adding a storage device, you may encounter error ID 20909:
Could not retrieve a certificate from the 9.x.x.x server because of the error: The underlying connection was closed: An unexpected error occurred on a send.
Details: An existing connection was forcibly closed by the remote host (0x80072746)
Workaround: Use PowerShell to add the IBM Storwize storage device:
$RunAsAcct = Get-SCRunAsAccount -Name "V5000RunAsAcct"
Add-SCStorageProvider -NetworkDeviceName "126.96.36.199" -TCPPort 5989 -Name "isvg25k1.kir.labs.ibm.com" -RunAsAccount $RunAsAcct
Note: In my example above, RunAsAcct was first created in the SCVMM console using a preferred naming convention. Also, -NetworkDeviceName is the IP address of the Storwize management interface, and -Name is the fully qualified domain name (FQDN) of the Storwize system or its management IP.
- In the Azure management portal, when adding a replication group to enable protection for larger volumes, you may encounter error code 600. This is due to an ASR-defined timeout policy that triggers after approximately 2 hours during the IBM Storwize remote copy mirroring or synchronization phase. Here are the ASR management portal job error details:
Job ID: 1592a88c-3245-4a63-b232-b5595999dbfb-2016-08-05 20:34:34Z ActivityId: 985f9a49-073a-4606-a9ee-3ff9ed27014f
Start Time: 8/5/2016 1:34:29 PM
Duration: 2 HOURS 2 MINUTES
Task execution has timed out while waiting for job to complete on VMM. (Error code: 600)
Possible causes: VMM might be overloaded.
Recommendation: Please retry the operation after sometime.
Workaround: After the associated job completes in the VMM console (in the Storwize storage management web interface you should also see the IBM Storwize remote copy change to a consistent synchronized state), restart the ASR management portal job. Alternatively, you can perform the IBM Storwize storage management steps manually: create a 4+ TB volume at both sites, define remote copy consistency groups and member volumes, and complete the remote copy synchronization manually; then, in the VMM console, create a primary-site replication group that includes the 4+ TB remote copy volume(s). Afterwards, you should be able to use the ASR management portal to add a replication group to enable protection for the larger volumes. A CLI sketch of these manual steps follows the note below.
Note: In my test environment, ASR job timeouts occurred for 4 TB or larger volumes. Your results may vary.
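For reference, the manual preparation just described might look roughly like the following on the Storwize CLI; the volume, pool, partnership and consistency group names are illustrative only, and the exact steps should be taken from the Storwize documentation for your code level.

# On each site, create the large volume (example: 5 TB in pool Pool0)
svctask mkvdisk -mdiskgrp Pool0 -size 5 -unit tb -name asr_vol01

# On the primary system, group the relationship for consistency and start synchronization
svctask mkrcconsistgrp -cluster remote_v5000 -name asr_cg01
svctask mkrcrelationship -master asr_vol01 -aux asr_vol01_dr -cluster remote_v5000 -consistgrp asr_cg01 -name asr_rel01
svctask startrcconsistgrp asr_cg01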
- Unlike IBM XIV, compressed volumes are not defined at the pool level; IBM Storwize compressed volumes must be created at the volume level. Unfortunately, VMM does not allow administrators to create an IBM Storwize compressed volume.
Workaround: Users can manually create compressed volumes using the IBM Storwize storage management interface. IBM recommends (and only supports) creating a Storwize pool that contains compressed volumes exclusively, then refreshing or rescanning the VMM storage array to detect the new storage pools and volumes. In other words, pools should not mix compressed volumes with regular or thin-provisioned volumes. Afterwards, the volumes can be assigned to any VMM host group where they can be used for cluster shared volumes (CSVs). At this stage, perform the manual IBM Storwize remote copy steps for 4+ TB volumes described in the workaround for item 2 above.
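As a sketch, creating a dedicated pool and a compressed volume from the Storwize CLI could look like the following; the pool name, volume name and sizes are placeholders.

# Dedicated pool intended to hold compressed volumes only
svctask mkmdiskgrp -name CompPool0 -ext 256

# Compressed, thin-provisioned volume created at the volume level
svctask mkvdisk -mdiskgrp CompPool0 -size 500 -unit gb -rsize 2% -autoexpand -compressed -name asr_comp_vol01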
For detailed step-by-step processes and further information about how to enable multisite on-premises cloud protection using Microsoft Azure Site Recovery with IBM Storwize, refer to the following website:
Update: Oracle has confirmed that server-side caching is a generic storage technology, so using server-side caching in a production environment carries no certification requirements.
Within an application ecosystem there are always challenges, and performance usually tops the chart.
Even with a well-balanced storage system and a fully tuned Oracle Database instance, an ever-growing user base always demands more from the system. So when tuning efforts start, the entire stack has to be examined. The application server and the database host can be upgraded easily, and to some extent the network too, but when it comes to storage, things are not always that easy. Really? Well, think again.
The solution is right here. Starting with AIX Version 7.1 with Technology Level 4 Service Pack 5 (7100-04-05) and later, the AIX operating system supports a new feature called server-side caching, which uses SAN-attached flash storage to improve read performance. IBM FlashSystem arrays are best known for their microsecond latency; when it comes to performance, nothing stands in their way.
Server-side caching is well supported right at the operating-system level and integrates well with the power of IBM FlashSystem. Because the caching is provided by the operating system, it is storage agnostic and eliminates the need to migrate data to a newer storage system. Moreover, because the FlashSystem is part of the SAN infrastructure, it can be shared by multiple servers that need a storage performance boost.
In the following sections, we take a deeper look at this feature to understand the key components involved in building a solution targeted at Oracle databases.
For more information about configuring various storage data caching modes, refer to the IBM Knowledge Center for AIX at: ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.osdevice/caching_configuring.htm
For more information about dedicated cache configuration refer to IBM Knowledge Center for AIX at: ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.osdevice/caching_dedicated_mode.htm
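As a quick illustration of the moving parts, the AIX cache_mgt command creates a cache pool on the SAN-attached flash LUN, carves a cache partition from it, assigns the partition to the target (database) disk and starts caching. The device names and sizes below are placeholders; refer to the Knowledge Center links above for the full syntax and options.

# hdisk10 = SAN-attached FlashSystem LUN used as cache, hdisk2 = Oracle data LUN to be accelerated
cache_mgt pool create -d hdisk10 -p cmpool0
cache_mgt partition create -p cmpool0 -s 128G -P cm1part1
cache_mgt partition assign -t hdisk2 -P cm1part1
cache_mgt cache start -t hdisk2
cache_mgt monitor get -h -s     # report cache hit statistics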
Benefits of server-side cache
During lab testing, various cache sizes were configured on the database host, and the workload was run with a constant number of virtual users to see the effect of each cache size. The cache was observed to take approximately 30 to 45 minutes to warm up. During this warm-up period, heavy write and read activity was seen on the cache device: data not found in the cache must be written to it (cache writes), while data already in the cache is read from it (cache reads) and returned to the application.
Note: The warm-up time may differ depending on the size of the cache device.
Figure 1 shows the TPS data captured during a five-hour run. The run was started without the server-side cache; the application quickly reaches saturation and TPS settles at a baseline value. One hour into the run the cache was started, and a gradual improvement in TPS follows. Once the configured cache is completely filled, there is no more room for new data, and the flash cache serves the application's read requests; data not found in the cache is written to it according to the caching algorithm. Heavily accessed blocks are treated as hot blocks, and blocks transition from cold to warm to hot, and back from hot to warm to cold, depending on their access pattern.
Figure 1: TPS improvement with server-side cache
For the run data shown in Figure 1, a cache size of 128 GB (that is, 10% of the database size) was configured, with 58 virtual users simulating a business application. Figure 2 shows the I/O activity recorded on the base storage system; at 0:38:36 it shows the increased read activity enabled by the server-side cache, which leads to the TPS improvement.
Figure 2: Total storage I/O read rate improvement
In the tests documented above, a flash cache equal to 10% of the database size was used. Table 1 shows relative TPS measurements made with larger cache partition sizes.
Table 1: TPS throughput improvement with different server-side cache sizes (columns: cache size in GB | cache size relative to database size, % | relative TPS improvement)
Note: The TPS throughput listed in Table 1 was observed in a controlled lab environment. Actual TPS values might change depending on the workload and overall network traffic.
Detailed technical white paper : https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=TSW03489USEN&
IBM AIX Infocenter : ibm.com/support/knowledgecenter/ssw_aix_71/com.ibm.aix.osdevice/caching_configuring.htm
Visit IBM booth at Oracle OpenWorld and I'll be there in person if you would like to know more.
See you in San Francisco !!
The growing shift toward cloud computing and the need for flexibility are making hybrid cloud solutions a serious business imperative. Hybrid cloud done right is an effective, highly agile, cost-saving alternative to traditional storage alone. For many organizations, disaster recovery is a primary need and the debut use case for bringing public cloud into their environment.
We have built some exciting hybrid cloud scenarios using a winning combination of IBM Spectrum Accelerate family offerings and VMware. IBM Spectrum Accelerate is software-defined storage (SDS) built from proven enterprise-class XIV technology. True to the calling card of SDS, it deploys on heterogeneous hardware, and IBM makes it deployable in every possible way: in private cloud, hybrid cloud and public cloud solutions, including as a service. It runs on purpose-built or customer-chosen commodity servers, can be hosted on public cloud infrastructures such as IBM Bluemix (IBM SoftLayer®), and is even available from a third-party vendor as a pre-installed appliance. It can be licensed to run on XIV and FlashSystem A9000 and A9000R systems for long-term investment value. It is also available as a service, IBM Spectrum Accelerate on Cloud, ordered through IBM Passport Advantage® and supported by IBM Lab Services, for deployment by the terabyte on IBM Cloud.
The design and mature technology underlying IBM Spectrum Accelerate offer a faster path to deploying and managing a hybrid cloud built for agility, ease of use and cost savings, providing:
- Advanced VMware-centric hybrid cloud solutions including disaster recovery with XIV systems
- Exceptional performance, availability and advanced features from proven technology
- An efficient hyper-converged infrastructure managed with an award-winning GUI and vCenter
- The ease of hosting, moving and managing workloads in a single-pane hybrid cloud environment
Coming to InterConnect 2017, Las Vegas, USA? Visit us in IBM Systems booth #344 (20-22 March 2017) for exciting live hybrid cloud demos:
For detailed implementation and configuration details, register and attend session #5028 "Implementing Disaster Recovery Solution across Hybrid Cloud using IBM Spectrum Storage" on Wednesday, 22nd March 2017 at InterConnect 2017.
1) Hybrid cloud disaster recovery solution leveraging VMware Site Recovery Manager and IBM Spectrum Storage. The diagram below provides an architectural overview of the demo we are showing.
We can have either IBM XIV storage or IBM Spectrum Accelerate at the protected site, with IBM Spectrum Accelerate at the recovery site. The Spectrum Accelerate instance can be located in another rack, in another data center, span two physically separate data centers, or reside in a public cloud such as IBM Bluemix.
This hybrid cloud solution uses the fully tested and certified IBM XIV Storage Replication Adapter (SRA) to deliver business continuity across a wide range of failures. It further brings flexibility to an organization by enabling migration of virtual infrastructure workloads between the data center and the cloud.
2) Orchestrated and automated storage provisioning using vRealize Automation and vRealize Orchestrator with IBM Spectrum Accelerate family offerings spanning hybrid cloud
In this solution, we use the IBM® Spectrum Control Base Edition integration with VMware vRealize Orchestrator and vRealize Automation to take the service around infrastructure beyond orchestration. IBM Spectrum Control Base Edition is a centralized cloud integration system that consolidates IBM storage provisioning, virtualization, cloud, automation, and monitoring solutions through a unified server platform. By using VMware’s Advanced Service Designer feature in vRealize Automation and vRealize Orchestrator, together with IBM Spectrum Accelerate, we show how you can deliver XaaS (Anything-as-a-Service) across hybrid cloud deployments to your users.
Dockerizing Oracle Database
Docker is the latest buzzword on the net. While a lot of work has been done on dockerizing various applications, software like Oracle still poses challenges during installation, configuration and execution. This blog entry gives users a flavour of integrating Oracle Database to run in a Dockerized environment.
Are you sure this can be done?
Yes. Not only can it be done; once it is up, you'll hardly notice any difference compared to an instance or database running on bare metal or a VM.
Should this be done?
For production Oracle Database instances we suggest NOT. Oracle will not support this officially.
So, what are the ideal use cases?
Typical use cases include creating an environment for testing a PSU or CPU patch provided by Oracle, providing an environment for developers with specific requirements, or building a training environment. Even your experienced DBAs will find it handy as a sandbox for personal testing.
What does the flow look like?
In the POC environment, we made this a multi-step process for better understanding and granular control over the process. Here is the basic flow.
The Platform Image (PI) is the base image on which customizations are made using a Dockerfile. Because the PI is read-only, Docker creates an intermediate container that holds only the changes made on top of the PI. When these intermediate containers are committed, their state is saved as new read-only images. The process continues until an ORACLE_HOME image containing the Oracle binaries is created.
Database instances are then spawned from this image. To create and keep the database on persistent storage, a data-only container can be created. The data container, together with a container based on the ORACLE_HOME image, is used to create the database instance and the database underneath it.
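A minimal sketch of those steps with the Docker CLI might look like the following; the image and container names are made up, and the Dockerfile that installs the Oracle binaries is not shown.

# Build the ORACLE_HOME image from a Dockerfile layered on the platform image
docker build -t oracle12c-ee:latest .

# Data-only container that owns the volume where the datafiles will live
docker create -v /u02/oradata --name oradata12c oracle12c-ee:latest /bin/true

# Database container that reuses the data container's volume
docker run -d --name orcl1 --volumes-from oradata12c -p 1521:1521 oracle12c-ee:latest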
Here is how the final picture looks. In this environment, two dockerized 12c database instances are running. The CDB database looks as shown in the picture above. To create, clone or plug in PDBs, an NFS mount point from a Storwize V7000 system is mapped on the Docker host.
So what are the advantages of this?
There are several advantages. Although there are multiple images, each image is read-only, which gives a consistent starting point every time a container is spawned from it. Each container has its own namespace isolation (PID, network) and control groups (cgroups), so every running container behaves like an independent host even though it is based on the same image. The images can be saved or pushed to an on-premises registry, making them readily available to other Docker hosts, and they can be moved across platforms: servers on a network, a laptop, or even public cloud space such as Amazon Web Services.
With the surge of smartphones and tablets has come a surge of applications for storing and sharing data. Employees of many organizations tend to use whichever applications they find most comfortable to store and share data. This puts the organization's data security at risk and pressures it to provide an official application that allows users to store and share data securely while the organization retains control of its data.
To gain control over data, organizations need robust, reliable and easy-to-integrate storage for their file sharing applications. Along with security and control over their data, organizations are looking for additional services or features that can enhance the productivity of their employees.
Citrix offers ShareFile—an enterprise-class, IT-managed, secure file sync and sharing service. ShareFile offers IT the ability to control sensitive corporate data while meeting the mobility and collaboration needs of users and the data security requirements of the enterprise.
Citrix provides multiple options to store the data be it on premise, in the cloud or a combination of both to meet the needs for data sovereignty, compliance, performance and costs. For organizations that require increased data protection, ShareFile offers customers the ability to encrypt data with their own encryption keys.
IBM has more than one option for the on-premises storage layer. For a highly scalable file sync and share solution, the answer is IBM Spectrum Scale. The other option is IBM Storwize V7000 Unified, which provides a unique combination of file and block storage for small and medium file sync and share deployments.
ShareFile extends an organization's data strategy to include existing network file drives, SharePoint and OneDrive for Business, allowing a single point of access to all data sources. StorageZone Connectors make it easy to securely access documents that otherwise cannot be reached outside corporate networks or on mobile devices. Any enterprise content management (ECM) system can be reached with the StorageZone Connectors SDK, expanding the types of data users can access and edit on the go via ShareFile.
Advanced security features including remote wipe, device lock, passcode protection, white/black listings and data expiration policies allow you to determine exactly how sensitive data is stored, accessed and shared. Track and log activity in real-time and create custom reports to meet compliance requirements.
While IBM Spectrum Scale brings scalability and performance with it, it also adds value through the features below:
- File encryption and secure erase
- Transparent flash cache
- Network performance monitoring
- Active File Management (AFM) parallel data transfers
- NFS version 4 support and data migration
- Backup and restore improvements
- File Placement Optimizer (FPO) enhancements.
IBM Storwize V7000 Unified, the other storage option, offers features such as:
- IBM Storage Mobile Dashboard
- Dynamic Migration
- IBM Easy Tier
- Thin provisioning
- Flash drives
- Active File Management (AFM) parallel data transfers
- IBM HyperSwap
- IBM Real-time Compression
- Encryption for virtualized storage
Below is the high level flow diagram of the solution using IBM Spectrum Scale
For a high-level overview of Citrix ShareFile and IBM Storage Systems, follow the link below:
For more information on the solution with IBM Spectrum Scale, follow the link below:
For more information on the solution with IBM Storwize V7000 Unified, follow the link below:
Disclaimer: Above are my personal thoughts and not necessarily of my employer.
Everyone who works in mission-critical environments understands the need for an effective disaster recovery solution. Organizations want disaster recovery operations to be fully automated and executable in a repeatable manner, so that they are always ready for a disaster. In addition, organizations have always demanded seamless migration of applications across sites for planned activities.
What is IBM and VMware’s joint DR solution in a virtualized environment?
An IBM SAN Volume Controller (SVC) stretched cluster with VMware Site Recovery Manager (SRM), which now provides support for stretched clusters (announcement link), is an ideal combination for disaster recovery using the IBM Storwize Family Storage Replication Adapter (SRA). It offers customers the ability to survive a wide range of failures transparently by planning for disaster avoidance, disaster recovery and mobility. The solution also offers planned live migration of applications running on virtual machines across sites by orchestrating cross-vCenter vMotion operations, enabling zero-downtime application mobility.
IBM SVC is an industry-leading storage virtualization solution that can virtualize storage devices from practically all other storage vendors. With a stretched cluster implementation, customers get an active-active configuration in which servers and ESXi hosts can connect to storage cluster nodes at all sites. This helps balance workloads across all cluster nodes and provides disaster recovery capability in case of a site failure.
VMware SRM can be configured seamlessly with IBM SVC stretched clusters using the IBM Storwize Family SRA. To configure the solution, the SVC nodes are set up in a stretched cluster configuration with the ESXi servers able to access storage across both sites. A quorum site is set up, as required by the IBM SVC stretched cluster configuration, to resolve tie-break situations in case of a link failure between the two main sites. A VMware vCenter Server is configured to manage the ESXi servers at each site, and VMware SRM is installed at each site to configure and automate the disaster recovery solution.
How to configure the solution?
Documents are available that individually describe the IBM SVC stretched cluster and VMware Site Recovery Manager, their benefits, and their respective configuration details. The purpose of this blog is to touch on the key steps and guidelines required to configure the combined solution for planned and unplanned downtime.
What configuration is needed on SVC?
- Configure SVC in a stretched cluster mode
SVC has supported stretched cluster configurations for some time now. A stretched cluster implementation allows the two nodes of an I/O group to be separated across two locations. These two locations (sites) can be two racks in a data center, two buildings on a campus, or two labs within supported distances. A third site is configured to host a quorum device that provides an automatic tie-break in the event of a link failure between the two main sites.
- Configure mirrored volume on a SVC stretched cluster
In SVC, the volume mirroring feature keeps two physical copies of a volume, and each copy can belong to a different pool. With the stretched cluster feature, a mirrored volume can be configured from external storage across the two physically separated sites, as in the brief CLI sketch below.
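A minimal CLI sketch, with illustrative pool and volume names, might look like this; the exact site-assignment steps for nodes, storage and quorum depend on the SVC code level and are omitted here.

# Put the cluster into stretched topology (after nodes, storage and quorum have been assigned to sites)
svctask chsystem -topology stretched

# Mirrored volume with one copy in each site's pool
svctask mkvdisk -mdiskgrp SiteA_pool:SiteB_pool -size 1 -unit tb -copies 2 -name esx_datastore_01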
Any special needs for vCenter and SRM installation to support this solution?
SRM stretched cluster support takes advantage of vSphere's ability to perform vMotion across sites and across vCenter Server instances. Therefore, the two vCenter Server instances (at the protected and recovery sites) need to be configured in Enhanced Linked Mode to enable cross-vCenter vMotion.
- SRM installation at protected and recovery sites
Install SRM server instances at the protected and recovery sites and register each SRM server instance with the Platform Services Controller at its respective site.
Where does the IBM SRA come into the picture?
The IBM Storwize Family SRA is a software add-on that integrates with SRM to run failovers. It extends SRM's capabilities and uses replication and mirroring as part of SRM's comprehensive disaster recovery planning (DRP) solution. The SRA is installed at both the protected and recovery sites and works with the SRM instance at each site to run failovers.
What's new when creating a vSphere storage policy?
Site Recovery Manager 6.1 adds a new type of protection group: the storage policy-based protection group. Storage policy-based protection groups use vSphere storage policies to identify protected datastores and virtual machines, and they automate the process of protecting and unprotecting virtual machines and of adding and removing datastores from protection groups. To easily identify IBM storage objects in the vSphere inventory, you can create an IBM storage tag, build a tag-rule-based storage policy, and then associate the stretched datastores with that storage policy using the IBM storage tag rules.
How to configure SRM for this solution?
- After pairing the sites, register the IBM Storwize Family SRA with the SRM server instances at the primary and recovery sites, then configure the array manager using the SVC nodes.
- Configure bidirectional network mappings, folder mappings, resource mappings, and placeholder datastore mappings between the protected and recovery sites.
- NEW ⇒ SRM 6.1 allows you to configure storage policy-based protection groups using storage policy mappings. When the storage policy at the protected site is mapped to a storage policy at the recovery site, SRM places the recovered virtual machines in the vCenter Server inventory and on datastores at the recovery site according to the mapped storage policy.
- NEW ⇒ A storage policy-based protection group enables automated protection of virtual machines associated with a storage policy, which in turn is applied by tagging them to reside on a particular datastore. When a virtual machine is associated with or disassociated from a storage policy, SRM automatically protects or unprotects it.
- Configure a recovery plan using the storage policy-based protection group.
Why test the recovery plan?
Testing the recovery plan makes the environment ready for disaster recovery situations by exercising almost every aspect of the plan. It is strongly recommended to test the recovery plan for both planned migration and disaster recovery scenarios to avoid surprises.
Okay, I have a recovery plan, but what's next?
Failover and reprotect: after a recovery plan has been tested successfully, it is ready for either planned failover or disaster recovery. After failover, the recovery site becomes the primary site, and SRM's reprotect function provides automated protection in the reverse direction.
Hopefully the steps above give an overview of the configuration required to set up the solution and plan accordingly. For additional configuration details, refer to the technical guide Implementing disaster recovery using IBM SAN Volume Controller and VMware Site Recovery Manager.
Disclaimer : These are my personal views and do not necessarily reflect that of my employer.
For information technology (IT) customers looking to control site expansion costs, Microsoft offers its Azure cloud services. To appeal to larger customers with existing disaster recovery (DR) models that span multiple sites and use SAN solutions, Microsoft recently added Azure Site Recovery to its cloud services mix. This allows Microsoft to target the full spectrum of potential customers for its cloud services. For small customers that cannot afford the costs (both CAPEX and OPEX) associated with additional sites, traditional Azure services meet their DR requirements. To grow Azure revenue further, however, Microsoft realized it needed to attract more large businesses with existing SAN infrastructures by appealing to cost-conscious CIOs facing common IT budget constraints. In essence, Microsoft cloud services continue to appeal to smaller customers who cannot add data centers or sites and to larger customers who wish to control site sprawl. Why bother with the cost and management headaches of maintaining additional sites for disaster recovery, just to meet service level agreements by protecting business-critical data and services, when you can let Microsoft protect them for you and save your money and sanity for other high-priority business needs? That is where Microsoft Azure and ASR services come into play.
Heralded as Microsoft's cloud computing platform, Azure provides a simple, reliable, and extensible web-based interface or front end that is tightly integrated with a Microsoft System Center VMM and SQL Server back end. While the overall Azure model is multi-tiered, think of it, more or less, as a web management portal that uses Internet Information Services (IIS) at its foundation with VMM as the engine that drives its cloud tasks. VMM, in turn, stores all of the cloud configuration and environment data in a SQL Server database. The Azure cloud itself is based on the Microsoft System Center application suite and consists of a Microsoft global network of secure data centers that offer compute, storage, network, and application resources to help protect your data and offset the high-availability and administrative costs of building and managing additional sites. Even though Azure has multiple tiers, the storage array aspects of Azure Site Recovery using SAN replication for on-premises clouds, and how that replication differs from traditional Hyper-V replica implementations, are the primary focus of this blog.
The Hyper-V replica feature is designed to protect VMs hosted by different servers using a built-in replication mechanism at the VM level. A primary-site VM can asynchronously replicate to a designated replica site over an Ethernet network infrastructure, including local area networks (LAN) or wide area networks (WAN). The designated replica remains offline in a stand-by state pending planned or unplanned VM failovers. After the initial VM copy is replicated to the secondary site, asynchronous replication occurs for only the primary VM changes. This network-based replication does not require shared storage or specific storage hardware, and Hyper-V replicas can be established between stand-alone or highly available (HA) VMs, or a combination of both. The Hyper-V servers can be geographically dispersed, and the VMs are not even required to belong to a domain. Thus, the Hyper-V replica requirements are rather basic and easy to implement, yet they are restricted to asynchronous network replication only.
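For context, enabling a classic network-based Hyper-V replica for a single VM is a two-cmdlet exercise in PowerShell; the VM name and replica server name below are examples only.

Enable-VMReplication -VMName "SQLVM01" -ReplicaServerName "hvreplica.contoso.local" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQLVM01"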
Until recently, Azure could only leverage Hyper-V replicas using a network replication channel, but it can now use SAN replication between two on-premises VMM sites or clouds. With the addition of a Hyper-V replica SAN replication channel, synchronous replication can be used to eliminate asynchronous lag times, and multiple-VM consistency is possible using Azure Site Recovery. It is important to realize that asynchronous SAN replication behaves much like asynchronous Hyper-V network replication: after the initial VM copy is replicated to the secondary site, asynchronous SAN replication occurs for only the primary VM changes. If IBM Real-time Compression is used, performance gains are also realized because less data has to replicate over the SAN. Whatever storage options such as compression are chosen, with just a few clicks in the Azure Site Recovery management portal, simple orchestration of IBM XIV replication and disaster recovery for Microsoft Hyper-V environments can be automated in the form of planned and unplanned site failovers. In a practical sense, this collection of Azure SAN replication enhancements and disaster recovery functionality is an extension of past Microsoft System Center VMM storage automation features.
So with the introduction of Microsoft ASR cloud services, larger customers have the option to provide DR for their private clouds using IBM XIV SAN replication but they also can take advantage of hybrid cloud protection by subscribing to Microsoft Azure services. This services model gives customers the opportunity to protect their existing data center and SAN infrastructure investments while enticing them to purchase additional Microsoft Azure cloud services. Refer to Figure 1 below for a general Microsoft cloud layout that uses ASR with IBM XIV SAN replication:
Figure 1: Microsoft Azure Site Recovery using IBM XIV SAN replication general lab configuration
For further information about Microsoft ASR using IBM XIV Storage System Gen3, including step-by-step configuration processes, please refer to the following white paper:
Real-time collaboration and information sharing are key drivers of an enterprise’s productivity and innovation. Finding solutions to enable such dynamic sharing in an enterprise setting while maintaining control, however, can be a challenge. Some organizations look to consumer-grade, cloud-based file sharing options that offer the scalability, ease of use and access users want but store sensitive company data on external servers. This exposes organizations to risks of data leaks while limiting IT visibility. Other options include using existing enterprise collaboration and content management systems that might be challenging to maintain and cumbersome for users.
What exactly is the solution?
The combined IBM® Spectrum Scale for object storage and ownCloud software technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution. ownCloud provides universal file access through a common file access layer on top of IBM Spectrum Scale for object storage, and the data files are kept in on-premises Spectrum Scale object storage. ownCloud allows enterprise IT organizations to regain control of sensitive data with managed file sync and share that gives users universal access to all of their data:
- Manage and protect data on premises – using IBM Spectrum Scale for object storage, with the complete software stack running on servers inside the data center, controlled by trusted administrators and managed to established policies.
- Integrate with existing IT system resources and policies – such as authentication systems, user directories, governance workflows, intrusion detection, monitoring, logging and storage management.
- Provide access through a comprehensive set of application programming interfaces (APIs) and mobile libraries to customize system capabilities, meet unique service requirements, and accommodate changing user needs.
Why do enterprises want an on-premises file sync and share solution?
Storing data off premises can strip an organization of its ability to manage and control its data, or to ensure that data can be deleted. Few enterprises, however, are willing to forgo the benefits that cloud services provide in agility and improved business processes. That leaves them struggling with how to use these technologies without importing security risks. They also recognize that users are increasingly able to migrate to external services that offer greater flexibility and mobility than the enterprise provides.
By retaining on-premises manageability of file sync and share services, IT can use a private cloud solution to reconcile the need for cloud technology with the requirements for security and privacy, and regain control of sensitive data without unwanted exposure. With the ability to enhance control and govern access to files, IT administrators can set sophisticated rules for user and device connections and prevent access based on those rules. Further, the capabilities and extensibility of on-premises file sync and share match the ease of use and complete access that first drove consumption of cloud services, while IT keeps sensitive assets under control in its own cloud environment.
Solution Lab testing
This solution consists of multiple servers installed with the ownCloud server software. ownCloud is a PHP web application running on top of Apache on Linux. The PHP application manages every aspect of ownCloud, from user management to plug-ins, file sharing and storage. Attached to the PHP application is a database where ownCloud stores user information, user-shared file details, plug-in application states, and the ownCloud file cache (a performance accelerator). ownCloud accesses the database through an abstraction layer, enabling support for Oracle, MySQL, SQL Server, and PostgreSQL. Complete web server logging is provided through the web server logs, and user and system logs are provided in a separate ownCloud log or can be directed to a syslog file.
In the lab testing environment, Active Directory (AD) is integrated with ownCloud for user account provisioning, and IBM Spectrum Scale for object storage is configured with local authentication. It is also possible to configure IBM Spectrum Scale for object storage with an enterprise directory server such as AD or Lightweight Directory Access Protocol (LDAP).
OpenStack Swift is installed on the protocol node(s) of the IBM Spectrum Scale for object storage.
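Once the object service is up, the standard OpenStack Swift client can be used to confirm that containers and objects land on the Spectrum Scale file system. The container name below is arbitrary, and the OS_* credentials are assumed to come from an environment file generated for your Keystone configuration.

# Environment (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_PROJECT_NAME, ...) sourced from the
# credentials file created for the Spectrum Scale object (Keystone) configuration
source ~/openrc

swift upload owncloud-objects sample.pdf     # create the container and store an object
swift stat owncloud-objects                  # confirm the container now exists
swift list owncloud-objects                  # list the stored object(s)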
IBM Spectrum Scale is a proven, enterprise-class file system, and OpenStack Swift is a best-of-breed object-based storage system. IBM Spectrum Scale for object storage combines these technologies to provide a new type of cloud storage that includes efficient data protection and recovery, proven scalability, and performance; snapshot and backup and recovery support; and information lifecycle management. Through these features, IBM Spectrum Scale for object storage can help simplify data management and allow enterprises to realize the full value of their data.
ownCloud is a self-hosted file sync and share server. It provides access to on-premises data through a web interface and sync clients, offering a platform to view, sync and share across devices easily while giving enterprises the ability to manage and control their data. ownCloud's open architecture is extensible through simple but powerful APIs for applications and plug-ins and works seamlessly with IBM Spectrum Scale for object storage.
The combined IBM Spectrum Scale for object storage and ownCloud server technologies help enterprises build a highly scalable, secure, and flexible on-premises file sync and share solution.
To learn more about the solution, please see the solution technical paper: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_on-premise-file-syn-share-owncloud
How far have you gone in tuning your database, be it Oracle or DB2? The efforts never seem good enough, and before you can breathe easy the battle begins again... and again.
Now you can relax a bit. With the IBM Easy Tier Server functionality available in Easy Tier, you'll be able to get more work done, measured as an improvement in transactions per second (TPS).
So what exactly is Easy Tier Server?
IBM Easy Tier Server is a unified storage caching and tiering solution across AIX servers and supported direct-attached storage (DAS) flash drives. Easy Tier Server allows the most frequently accessed or “hottest” data to be placed (cached) closer to the hosts, thus overcoming SAN latency. The Easy Tier Server core relies on the DS8870 cooperating with heterogeneous hosts to make a global decision on which data to copy to the hosts' local SSDs for improved application response time. DAS SSD devices therefore play an important role in an Easy Tier Server implementation. Specializing in high I/O performance, SSD cache has the upper hand in cost per input/output operations per second (IOPS).
Easy Tier technology has evolved over the years and is now in its fifth generation. Easy Tier Server is one of several Easy Tier enhancements introduced with DS8000 Licensed Machine Code 7.7.10.xx.xx. Both the Easy Tier and Easy Tier Server licenses, although required, are available at no cost.
Which workloads are the best fit for Easy Tier Server?
Because Easy Tier Server implements a read-only local DAS cache on the hosts, some scenarios take particular advantage of this feature. These are:
- Real-time analytics workload
- Large content data
- Online transaction processing (OLTP) workload
- Virtual machine (VM) consolidation
- Big Data
Under the hood
The Easy Tier Server feature consists of two major components
- The Easy Tier Server coherency server
The Easy Tier Server coherency server runs in the DS8870 and manages how data is placed onto the internal flash caches on the attached hosts. It also integrates with Easy Tier data placement functions for the best optimization across the DS8870 internal tiers (SSD, Enterprise, and Nearline). The coherency server communicates asynchronously with the host systems (the coherency clients) and generates caching advice for each coherency client based on Easy Tier placement and statistics.
- The Easy Tier Server coherency client
The Easy Tier Server coherency client runs on the host system and keeps local caches on DAS solid-state drives. The coherency client uses the Easy Tier Server protocol to establish system-aware caching that interfaces with the coherency server. An Easy Tier Server coherency client driver cooperates with the operating system to direct I/Os either to local DAS cache or to DS8870, in a transparent way to the applications.
The POWER system has DAS attached, which the Easy Tier Server coherency client driver uses to create the local cache. Easy Tier Server coherency clients are designed to route read hits to the application host's DAS while sending read misses directly to the DS8870. Similarly, write I/Os are routed to the DS8870, and cache pages related to those I/O address ranges are invalidated in the client's local cache to preserve cache coherency and data integrity. The coherency client and coherency server share statistics to ensure that the best caching decisions are made.
And the bottom line is?
In the lab, a brokerage OLTP workload was executed to simulate a maximum amount of read requests. At the beginning of the run, the hdisks configured for the ASM DATA disk group showed maximum utilization because no caching was enabled. Sixty minutes into the run, caching was enabled on the database host running the workload. Soon afterwards, Easy Tier Server started migrating hot extents from the DS8870 to the database host running the Easy Tier Server coherency client. Over time, as more and more hot extents were migrated from the DS8870, most activity was observed on the cache devices and less on the DS8870 storage. As more extents containing the required data were cached, read requests were satisfied locally, eliminating the need to read that data from the storage system. The effective use of locally cached data yielded a 100% improvement in the TPS observed during the test run.
Whether the environment is latency sensitive, runs applications with a high read/write ratio, or relies on highly parallel processing, there is an increasing need to process data quickly, and Easy Tier Server can be considered in these situations. Where the read performance of the storage can become a major bottleneck, faster storage has high value, making Easy Tier Server a good fit.
Publications and Resources
White paper WP102534, available on the IBM Techdocs website, provides detailed information on the testing effort.
IBM System Storage DS8870: Architecture and Implementation, SG24-8085
IBM System Storage DS8000 Host Attachment and Interoperability, SG24-8887
IBM System Storage DS8870 Product Guide
IBM System Storage DS8000 Easy Tier, REDP-4667
IBM System Storage DS8000 Easy Tier Heat Map Transfer, REDP-5015
IBM System Storage DS8000: Easy Tier Application, REDP-5014
Views / thoughts expressed above are my own, not necessarily of my employer.