JeffHebert 060001UEQ2 Tags:  based cloud storage block saas nas file san hds iaas paas hp 4,053 Views
IBM® System Storage™ N series with Operations Manager software offers comprehensive monitoring and management for N series enterprise storage and content delivery environments. Operations Manager is designed to provide alerts, reports, and configuration tools from a central control point, helping you keep your storage and content delivery infrastructure in line with business requirements for high availability and low total cost of ownership.
We focus especially on Protection Manager, which is designed as an intuitive backup and replication management software for IBM System Storage N series unified storage disk-based data protection environments. The application is designed to support data protection and help increase productivity with automated setup and policy-based management.
This IBM Redbooks® publication demonstrates how Operations Manager manages IBM System Storage N series storage from a single view and remotely from anywhere. Operations Manager can monitor and configure all distributed N series storage systems, N series gateways, and data management services to increase the availability and accessibility of their stored and cached data. Operations Manager can monitor the availability and capacity utilization of all its file systems regardless of where they are physically located. It can also analyze the performance utilization of its storage and content delivery network. It is available on Windows®, Linux®, and Solaris™.
IBM Storwize V7000 Unified Disk System: The most powerful and easy-to-use innovative disk system in the storage marketplace (October 14, 2011 5:54 PM)
New SONAS release offers enhanced performance
Businesses continue to search for storage solutions that save money without sacrificing performance. Last year, IBM introduced Scale Out Network Attached Storage (SONAS), the industry’s first network-attached storage (NAS) offering to address this business need. SONAS is an enterprise-class NAS system that provides extreme scalability, availability and security—and does so with record-breaking performance. It’s designed as a single global repository to manage multiple petabytes of storage and billions of files all under one file system.
In April, IBM announced significant performance enhancements to SONAS: improved information lifecycle management (ILM), hierarchical storage management (HSM) as well as ease of deployment and antivirus integration.
Todd Neville, SONAS program leader at IBM, says SONAS is unique in that it can scale nearly linearly to almost any performance level. With SONAS, he says, “You can build a system that’s as fast as you want it to be; but it’s not just about absolute size, it’s also about bang for your buck. We’ve significantly increased the software performance in our upcoming release 1.2, so customers see a significant performance increase on their current platform with no additional costs.”
Funda Eceral, SONAS market segment manager at IBM, says SONAS is the only true scale-out NAS system available in the marketplace. “While you can nondisruptively add capacity with storage building blocks,” Eceral says, “you can also still continue to independently scale out your I/O performance with interface nodes. It brings operational efficiency and extraordinary utilization rates for each customer.”
Three Key Features
This version of SONAS offers three key features, according to Neville:
“Everyone says, ‘We do tiering, HSM and ILM,’ but design matters—IBM does it differently.” —Todd Neville, SONAS program leader, IBM
Originating Author: David Vellante
Co-author: David Floyer
There has been significant discussion in the industry about storage optimization and making better use of storage capacity. A number of storage vendors have successfully marketed data de-duplication for offline/backup applications, reducing the volume of backup data by a factor of 5-15:1, according to Wikibon user input.
Data de-duplication as applied to backup use cases is different from compression in that compression actually changes the data, using algorithms to create a computational byproduct and write fewer bits. With de-duplication, data is not changed; rather, copies 2 through N are deleted and pointers are inserted to a 'master' instance of the data. Single-instancing can be thought of as synonymous with de-duplication.
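The pointer-to-master idea described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation; the class and field names are hypothetical, and a content hash stands in for whatever matching scheme a real product uses.

```python
import hashlib

# Toy single-instance store: duplicate payloads are replaced by pointers
# to one 'master' copy, keyed by content hash. Names are hypothetical.
class SingleInstanceStore:
    def __init__(self):
        self.masters = {}   # content hash -> the single master copy
        self.pointers = {}  # object name -> content hash (the "pointer")

    def write(self, name, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.masters:
            self.masters[digest] = data  # first copy becomes the master
        self.pointers[name] = digest     # copies 2-N add only a pointer

    def read(self, name):
        return self.masters[self.pointers[name]]

store = SingleInstanceStore()
store.write("backup-mon", b"payroll records")
store.write("backup-tue", b"payroll records")  # duplicate: pointer only
```

Note that the data itself is never transformed, which is the distinction the text draws against compression.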
Traditional data de-duplication technologies, however, are generally unsuitable for online or primary storage applications because the overhead associated with the algorithms required to de-duplicate data will unacceptably elongate response times. As an example, popular data de-duplication solutions such as those from Data Domain, ProtecTIER (Diligent/IBM), FalconStor and EMC/Avamar are not used for reducing capacities of online storage.
There are three primary approaches to optimizing online storage, reducing capacity requirements and improving overall storage efficiencies. Generally, Wikibon refers to these in the broad category of on-line or primary data compression, although the industry will often use terms like de-duplication (e.g. NetApp A-SIS) and single instancing. These data reduction technologies are illustrated by the following types of solutions:
Unlike some data reduction solutions for backup, these three approaches use lossless data compression algorithms, meaning that, mathematically, the original bits can always be reconstructed.
Each of these approaches has certain benefits and drawbacks. The obvious benefit is reduced storage costs. However, each solution places another technology layer in the network and increases complexity and risk.
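The lossless property is easy to demonstrate: compression shrinks the byte stream, and decompression reconstructs it exactly. The snippet below uses Python's zlib as a stand-in for any lossless algorithm; the sample data is invented for illustration.

```python
import zlib

# Lossless round-trip: fewer bits on disk, exact reconstruction on read.
original = b"AAAA" * 1024 + b"BBBB" * 1024  # highly redundant sample data
compressed = zlib.compress(original)

restored = zlib.decompress(compressed)
# restored == original, and compressed is far smaller than original
```

A lossy codec (as used for some media formats) would fail this exact-equality check, which is why it is unacceptable for primary business data.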
Array-based data reduction
Array-based data reduction technologies such as A-SIS operate in-line as data is being written to reduce primary storage capacity. The de-duplication feature of WAFL (NetApp’s Write Anywhere File Layout) identifies duplicates of a 4K block at write time: a weak 32-bit digital signature of each 4K block is placed into a signature file in the metadata, and candidate matches are then compared bit-by-bit to ensure that there is no hash collision. The work of identifying the duplicates is similar to the snap technology and is done in the background if controller resources are sufficient. The default is once every 24 hours and every time the percentage of changes reaches 20%.
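The fingerprint-then-verify scheme can be sketched as follows. This is a simplified illustration of the general technique, not NetApp's actual code; zlib's CRC-32 stands in for the weak 32-bit signature, and the byte-by-byte comparison guarantees a hash collision can never merge two distinct blocks.

```python
import zlib

BLOCK = 4096  # 4K blocks, as in the WAFL description above

def weak_signature(block: bytes) -> int:
    """Cheap 32-bit fingerprint (CRC-32 stands in for the real signature)."""
    return zlib.crc32(block)

def dedupe(blocks):
    """Match blocks by signature, then verify byte-by-byte before merging."""
    seen = {}       # signature -> unique blocks carrying that signature
    block_map = []  # logical block index -> (signature, copy index)
    for b in blocks:
        candidates = seen.setdefault(weak_signature(b), [])
        for i, master in enumerate(candidates):
            if master == b:                 # full bit-by-bit verification
                block_map.append((weak_signature(b), i))  # point at master
                break
        else:
            candidates.append(b)            # genuinely new block
            block_map.append((weak_signature(b), len(candidates) - 1))
    return seen, block_map

blocks = [bytes([1]) * BLOCK, bytes([2]) * BLOCK, bytes([1]) * BLOCK]
seen, block_map = dedupe(blocks)  # three logical blocks, two unique
```

The two-stage check is the key design point: the weak signature keeps the write-time cost low, while the verification pass makes the scheme safe.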
In addition, there are three main disadvantages of an A-SIS solution, including:
IT Managers should note that A-SIS is included as a no-charge standard offering within NetApp's Nearline component of ONTAP, the company's storage OS.
Host-managed offline data compression solutions
Ocarina is an example of a host-managed data reduction offering, or what it calls 'split-path.' It consists of an offline process that reads files through an appliance, compresses those files and writes them back to disk. When a file is requested, another appliance re-hydrates the data and delivers it to the application. The advantage of this approach is much higher levels of compression, because the process is offline and uses many more robust algorithms. A reasonable planning assumption is that reduction ratios will range from 3-6:1, and sometimes higher for initial ingestion and read-only Web environments. However, because of the need to re-hydrate when new data is written, classical production environments may see lower ratios.
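The split-path flow, compress offline, re-hydrate on read, can be sketched like this. The function names and the header-marker scheme are purely illustrative assumptions, not Ocarina's design; zlib at its highest level stands in for the more robust offline algorithms the text mentions.

```python
import zlib

# Hypothetical split-path sketch: a background job compresses files at
# rest; the read path transparently re-hydrates them for the application.
def offline_compress(store, name):
    """Background pass: read the file, compress hard, write it back."""
    store[name] = b"CMP" + zlib.compress(store[name], 9)

def read_with_rehydrate(store, name):
    """Read path: decompress on demand before the app sees the data."""
    data = store[name]
    return zlib.decompress(data[3:]) if data.startswith(b"CMP") else data

store = {"report.txt": b"quarterly results " * 200}
offline_compress(store, "report.txt")       # file now smaller on disk
doc = read_with_rehydrate(store, "report.txt")  # app gets original bytes
```

The sketch also shows why write-heavy workloads fare worse: every rewrite lands uncompressed until the next offline pass comes around.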
In the case of Ocarina, the company has developed proprietary algorithms that can improve reduction ratios on many existing file types (e.g. jpeg, pdf, mpeg, etc), which is unique in the industry.
The main drawbacks of host-managed data reduction solutions are:
On balance, solutions such as Ocarina are highly suitable and cost-effective for infrequently accessed data and read-intensive applications. High update environments should be avoided.
In-line data compression
IBM Real-time Compression offers in-line data compression whereby a device sits between servers and the storage network (see Shopzilla's architecture). Wikibon members indicate a compression ratio of 1.5-2:1 is a reasonable rule-of-thumb.
The main advantage of the IBM Real-time Compression approach is very low latency (i.e. microseconds) and improved performance. Storage performance is improved because compression occurs before data hits the storage network. As a result, all data in the storage network is compressed, meaning less data is sent through the SAN, cache, internal array, and disk devices, minimizing resource requirements and backup windows by 40% or more, according to Wikibon estimates.
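The in-line idea, compress in the write path so everything downstream sees fewer bytes, can be illustrated with a small sketch. This is a generic model of the technique, not IBM's implementation; a dictionary stands in for the SAN, and a fast zlib level models the low-latency requirement.

```python
import zlib

# Generic in-line compression sketch: data is compressed before it
# reaches the SAN, so cache, array and disk all carry fewer bytes.
def inline_write(san, lun, offset, data):
    payload = zlib.compress(data, 1)  # fast level keeps added latency low
    san.setdefault(lun, {})[offset] = payload
    return len(payload)               # bytes actually sent into the SAN

def inline_read(san, lun, offset):
    return zlib.decompress(san[lun][offset])

san = {}
data = b"log entry: status OK\n" * 100
sent = inline_write(san, "lun0", 0, data)   # sent < len(data)
back = inline_read(san, "lun0", 0)
```

Because the reduction happens once, up front, every downstream component (SAN links, cache, disks, backup) benefits from the smaller payload, which is the performance argument made above.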
There are two main drawbacks of the IBM Real-time Compression approach, including:
On balance, the advantages of an Ocarina or IBM Real-time Compression approach are that they can be applied to any file-based storage (i.e. heterogeneous devices). NetApp and other array-based solutions lock customers into a particular storage vendor but have certain advantages as well. For example, they are simpler to implement because they are already integrated.
An Ocarina approach is best applied in read-intensive environments, where it will achieve better reduction ratios due to its post-process/batch ingestion methodology. IBM Real-time Compression will achieve the highest levels of compression and ROI in general-purpose enterprise data centers of 30 TB or greater.
Action Item: On-line data reduction is rapidly coming to mainstream storage devices in your neighborhood. Storage executives should familiarize themselves with the various technologies in this space and demand that storage vendors apply capacity optimization techniques to control storage costs.
Footnotes: RELATED RESEARCH
Storage Efficiency through Real-time Data Compression for the Entire Data Lifecycle
Agnostic to Applications and Storage
IBM Real-time Compression appliances reduce storage capacity utilization by up to 80% without performance degradation. IBM Real-time Compression appliances increase the capacity of existing storage infrastructure helping organizations meet the demands of rapid data growth while also enhancing storage performance and utilization. The result is unprecedented cost savings, ROI, operational and environmental efficiencies.
The IBM Real-time Compression appliances address data optimization on primary storage so your capacity is optimized across all tiers of storage. The IBM Real-time Compression Appliance STN6500 and STN6800 align to your existing storage networking configuration for easy installation. The appliances install transparently in front of your existing NAS storage and, through patented real-time compression, reduce the size of every file created.
Shopzilla has been a customer of the IBM Real-time Compression technology for over 2 years. Here they describe the benefits of the technology.
Shopzilla: IBM Real-Time Compression is Transparent
Manage storage more effectively with virtualization capabilities from IBM
As the need for data storage continues to spiral upward, traditional physical approaches to storage management become increasingly problematic. Physically expanding the storage environment can be costly, time-consuming and disruptive—especially when it has to be done again and again in response to ever-growing storage demands. Yet manually improving storage utilization to control growth can be challenging. Physical infrastructures can also be inflexible at a time when businesses need to be able to make ever-more rapid changes in order to stay competitive.

The alternative is a virtualized approach in which storage virtualization software presents a “view” of storage resources to servers that is different from the actual physical hardware in use. This logical view can hide undesirable characteristics of storage while presenting storage in a more convenient manner for applications. For example, storage virtualization may present storage capacity as a consolidated whole, hiding the actual physical boxes that contain the storage. In this way storage becomes a logical pool of resources that exists virtually, regardless of where the actual physical storage resources are located in the larger information infrastructure. These software-defined virtual resources are easier and less disruptive to change and manage than hardware-based physical storage devices, since they don’t involve moving equipment or making physical connections. As a result, they can respond more flexibly and dynamically to changing business needs. Similarly, the flexibility afforded by virtual resources makes it easier to match storage to business requirements.
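The logical-pool idea described above can be sketched in miniature: capacity from several physical boxes is pooled, and a virtual volume is carved from the pool without the server ever seeing the boxes. All names here are hypothetical illustrations; no specific IBM product API is implied.

```python
# Toy storage-virtualization sketch: extents from multiple physical
# arrays form one logical pool; volumes are carved from that pool.
class StoragePool:
    def __init__(self):
        self.extents = []  # (device, extent id) pairs from all boxes
        self.volumes = {}  # volume name -> extents assigned to it

    def add_device(self, device, n_extents):
        """Fold a physical box's capacity into the shared pool."""
        self.extents += [(device, i) for i in range(n_extents)]

    def create_volume(self, name, size):
        """Present `size` extents as one volume, wherever they live."""
        self.volumes[name] = self.extents[:size]
        self.extents = self.extents[size:]

pool = StoragePool()
pool.add_device("array-A", 4)   # capacity from two physical boxes...
pool.add_device("array-B", 4)
pool.create_volume("vol1", 6)   # ...consolidated into a single volume
```

The volume spans both arrays, yet the consumer addresses only "vol1"; that indirection is what lets capacity be grown or migrated without touching the servers.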