JeffHebert 060001UEQ2 Tags:  ssd range disk iaas mid enterprise storage v7000 ibm svc 3,157 Views
“Procedures for replacing or adding nodes to an existing cluster”
Scope and Objectives
The scope of this document is twofold. The first section provides a procedure for replacing existing nodes in an SVC cluster non-disruptively. For example, the current cluster consists of two 2145-8F4 nodes and the desire is to replace them with two 2145-CF8 nodes while maintaining the cluster size at two nodes. The second section provides a procedure for adding nodes to an existing cluster to support additional workload. For example, the current cluster consists of two 2145-8G4 nodes and the desire is to grow it to a four-node cluster by adding two 2145-CF8 nodes.
The objective of this document is to provide greater detail on the steps required to perform the above procedures than is currently available in the SVC Software Installation and Configuration Guide, SC23-6628, located at www.ibm.com/storage/support/2145. In addition, it provides important information to help the person performing the procedures avoid problems while following the various steps.
Section 1: Procedure to replace existing SVC nodes non-disruptively
You can replace SAN Volume Controller 2145-8F2, 2145-8F4, 2145-8G4, and 2145-8A4 nodes with SAN Volume Controller 2145-CF8 nodes in an existing, active cluster without taking an outage on the SVC or on your host applications. In fact, you can use this procedure to replace any model of node with a different model, as long as the SVC software level supports that node model. For example, you might want to replace a 2145-8F2 node in a test environment with a 2145-8G4 node previously in production that has just been replaced by a new 2145-CF8 node.
Note: If you are replacing existing 2145-4F2 nodes with new 2145-CF8 nodes, do not use this procedure; you must use the procedure written specifically for that upgrade, located at the following URL:
This procedure does not require changes to your SAN environment because the new node being installed uses the same worldwide node name (WWNN) as the node it replaces. Because the SVC uses the WWNN to generate the unique worldwide port names (WWPNs), no SAN zoning or disk controller LUN masking changes are required.
IBM Storwize V7000 Unified Disk System: The most powerful and easy-to-use innovative disk system in the storage marketplace
October 14, 2011 5:54 PM
JeffHebert 060001UEQ2 Tags:  storage san disk enterprise ibm nas fc midrange ssd fibre 2,755 Views
JeffHebert 060001UEQ2 Tags:  mid hitachi range enterprise storage hp ibm cluster emc performance cloud 2,130 Views
Technology giant IBM on Tuesday said it has emerged as the top player in the Indian external disk storage systems market for the year 2010.
According to IT research firm IDC, IBM India maintained its 2010 leadership with a 26.2 per cent market share (in revenue terms) and a lead of more than four percentage points over its nearest competitor.
“While the overall external disk storage market in India declined 1.5 per cent in calendar year 2010, according to IDC, IBM has been able to grow its hold in the country given its constant innovation and focus on bringing in storage efficiency,” Sandeep Dutta, Storage, Systems and Technology Group, IBM India/South Asia, told PTI.
Also, in Q4 2010, IBM maintained leadership with a 29 per cent market share and a seven percentage point lead over its nearest competitor in revenue terms.
During the year 2010, IBM launched products such as the IBM Storwize V7000 and the IBM System Storage DS8000, which helped it strengthen its leadership position in the market.
During the year, IBM bagged orders from Kotak, Suzlon, Oswal mills, CEAT, L&T (ECC division), Indian Farmer and Fertilizer Cooperative Ltd, Solar Semiconductors and Ratnamani Metals.
JeffHebert 060001UEQ2 Tags:  enterprise compression time storage san de-duplication nas real 1,965 Views
Originating Author: David Vellante
Co-author: David Floyer
There has been significant discussion in the industry about storage optimization and making better use of storage capacity. A number of storage vendors have successfully marketed data de-duplication for offline/backup applications, reducing the volume of backup data by a factor of 5-15:1, according to Wikibon user input.
Data de-duplication as applied to backup use cases is different from compression in that compression actually changes the data, using algorithms to create a computational byproduct and write fewer bits. With de-duplication, data is not changed; rather, copies 2 through N are deleted and pointers are inserted to a 'master' instance of the data. Single-instancing can be thought of as synonymous with de-duplication.
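To make the distinction concrete, here is a minimal Python sketch of single-instancing. It is not any vendor's implementation; the 4K block size and the use of a SHA-256 fingerprint are assumptions made purely for illustration. Only the first copy of a block is stored, and copies 2 through N become pointers to that master instance.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for illustration

store = {}      # fingerprint -> the single 'master' copy of the block
pointers = []   # logical layout: one fingerprint per written block

def write_block(data: bytes) -> None:
    """Keep copy 1; turn copies 2..N into pointers to the master instance."""
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in store:
        store[fingerprint] = data      # first (master) instance is stored once
    pointers.append(fingerprint)       # every later write costs only a pointer

def read_block(index: int) -> bytes:
    """Reads follow the pointer back to the unmodified master block."""
    return store[pointers[index]]

# Two identical logical blocks consume the physical space of one.
write_block(b"A" * BLOCK_SIZE)
write_block(b"A" * BLOCK_SIZE)
print(len(pointers), "logical blocks,", len(store), "physical block")
```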
Traditional data de-duplication technologies, however, are generally unsuitable for online or primary storage applications because the overhead associated with the algorithms required to de-duplicate data will unacceptably elongate response times. As an example, popular data de-duplication solutions such as those from Data Domain, ProtecTIER (Diligent/IBM), FalconStor and EMC/Avamar are not used for reducing capacities of online storage.
There are three primary approaches to optimizing online storage, reducing capacity requirements and improving overall storage efficiencies. Generally, Wikibon refers to these in the broad category of on-line or primary data compression, although the industry will often use terms like de-duplication (e.g. NetApp A-SIS) and single instancing. These data reduction technologies are illustrated by the three types of solutions discussed below: array-based data reduction (e.g. NetApp A-SIS), host-managed offline data compression (e.g. Ocarina), and in-line data compression (e.g. IBM Real-time Compression).
Unlike some data reduction solutions for backup, these three approaches use lossless data compression algorithms, meaning the original bits can always be reconstructed exactly.
Each of these approaches has certain benefits and drawbacks. The obvious benefit is reduced storage costs. However, each solution places another technology layer in the network and increases complexity and risk.
Array-based data reduction
Array-based data reduction technologies such as A-SIS operate in the array as data is being written to reduce primary storage capacity. The de-duplication feature of WAFL (NetApp's Write Anywhere File Layout) identifies duplicates of a 4K block: at write time a weak 32-bit digital signature of the 4K block is created and placed into a signature file in the metadata, and candidate duplicates are later compared bit-by-bit to ensure that there is no hash collision. The work of identifying the duplicates is similar to the snapshot technology and is done in the background if controller resources are sufficient. The default is to run once every 24 hours and every time the percentage of changed data reaches 20%.
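That flow can be sketched roughly as follows. This is only an illustration of the general pattern: zlib's CRC-32 stands in for the weak 32-bit signature and Python dictionaries stand in for WAFL's on-disk metadata, neither of which this article actually specifies.

```python
import zlib
from collections import defaultdict

BLOCK_SIZE = 4096  # 4K blocks, as described above

signature_file = defaultdict(list)  # weak 32-bit signature -> candidate block ids
blocks = {}                         # block id -> data (stands in for on-disk blocks)

def on_write(block_id: int, data: bytes) -> None:
    """At write time, record a weak 32-bit signature in the metadata."""
    blocks[block_id] = data
    signature_file[zlib.crc32(data)].append(block_id)

def background_dedup() -> dict:
    """Background pass (e.g. once every 24 hours): confirm candidates bit-by-bit
    so a hash collision never causes two different blocks to be merged."""
    remap = {}  # duplicate block id -> surviving block id
    for candidates in signature_file.values():
        survivors = []
        for bid in candidates:
            match = next((s for s in survivors if blocks[s] == blocks[bid]), None)
            if match is not None:
                remap[bid] = match    # true duplicate: point it at the survivor
            else:
                survivors.append(bid) # unique block (or mere collision): keep it
    return remap
```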
In addition, there are three main disadvantages of an A-SIS solution, including:
IT Managers should note that A-SIS is included as a no-charge standard offering within NetApp's Nearline component of ONTAP, the company's storage OS.
Host-managed offline data compression solutions
Ocarina is an example of a host-managed data reduction offering, or what it calls 'split-path.' It consists of an offline process that reads files through an appliance, compresses those files and writes them back to disk. When a file is requested, another appliance re-hydrates the data and delivers it to the application. The advantage of this approach is much higher levels of compression, because the process is offline and can use more robust algorithms. A reasonable planning assumption is that reduction ratios will range from 3:1 to 6:1, and sometimes higher, for initial ingestion and read-only Web environments. However, because of the need to re-hydrate when new data is written, classical production environments may see lower ratios.
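A rough Python sketch of the split-path workflow is shown below. Generic zlib stands in for Ocarina's proprietary, content-aware algorithms, and the '.ocz' suffix is an invented marker, so treat this only as the shape of the offline-compress and re-hydrate cycle.

```python
import zlib
from pathlib import Path

SUFFIX = ".ocz"  # invented marker for already-compressed files

def offline_compress(directory: str) -> None:
    """Offline pass: read each file through the 'appliance', compress it,
    and write the compressed copy back to disk in place of the original."""
    for path in Path(directory).iterdir():
        if path.is_file() and not path.name.endswith(SUFFIX):
            data = path.read_bytes()
            Path(str(path) + SUFFIX).write_bytes(zlib.compress(data, level=9))
            path.unlink()  # original is replaced by the compressed copy

def rehydrate(path: str) -> bytes:
    """Read path: decompress ('re-hydrate') the file before handing it to the application."""
    return zlib.decompress(Path(path).read_bytes())
```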
In the case of Ocarina, the company has developed proprietary algorithms that can improve reduction ratios on many existing file types (e.g. JPEG, PDF, MPEG), which is unique in the industry.
The main drawbacks of host-managed data reduction solutions are:
On balance, solutions such as Ocarina are highly suitable and cost-effective for infrequently accessed data and read-intensive applications. High update environments should be avoided.
In-line data compression
IBM Real-time Compression offers in-line data compression whereby a device sits between servers and the storage network (see Shopzilla's architecture). Wikibon members indicate a compression ratio of 1.5:1 to 2:1 is a reasonable rule of thumb.
The main advantages of the IBM Real-time Compression approach are very low latency (on the order of microseconds) and improved performance. Storage performance is improved because compression occurs before data hits the storage network. As a result, all data in the storage network is compressed, meaning less data is sent through the SAN, cache, internal array, and disk devices, reducing resource requirements and shrinking backup windows by 40% or more, according to Wikibon estimates.
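As a back-of-the-envelope illustration of why compressing before the SAN helps, the sketch below uses zlib as a stand-in for the real-time compression engine (whose internals the article does not describe) and reports how many bytes actually travel downstream on a write.

```python
import zlib

def write_through_appliance(payload: bytes) -> bytes:
    """Compress in-line on the write path; everything downstream
    (SAN, cache, array, disk, backup) sees only the compressed bytes."""
    wire_bytes = zlib.compress(payload)
    saved = 1 - len(wire_bytes) / len(payload)
    print(f"sent {len(wire_bytes)} of {len(payload)} bytes ({saved:.0%} less on the wire)")
    return wire_bytes

# Highly repetitive sample data compresses far better than the 1.5-2:1
# rule of thumb quoted above; mixed production data lands closer to that range.
write_through_appliance(b"customer_record;" * 10_000)
```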
There are two main drawbacks of the IBM Real-time Compression approach, including:
On balance, the advantages of an Ocarina or IBM Real-time Compression approach are that they can be applied to any file-based storage (i.e. heterogeneous devices). NetApp and other array-based solutions lock customers into a particular storage vendor but have certain advantages as well. For example, they are simpler to implement because they are already integrated.
An Ocarina approach is best applied in read-intensive environments, where it will achieve better reduction ratios due to its post-process/batch ingestion methodology. IBM Real-time Compression will achieve the highest levels of compression and ROI in general-purpose enterprise data centers of 30 TB or greater.
Action Item: On-line data reduction is rapidly coming to mainstream storage devices in your neighborhood. Storage executives should familiarize themselves with the various technologies in this space and demand that storage vendors apply capacity optimization techniques to control storage costs.
JeffHebert 060001UEQ2 Tags:  hp enterprise ibm efficient range emc effective mid storage performance 1,874 Views
"As the world becomes more interconnected, instrumented and intelligent, more and more information is created. This influx of information creates both challenges and opportunities. Companies must build smarter information infrastructures that can handle all of this information and manage it intelligently. IBM has invested billions of dollars developing smart storage solutions that embody a set of essential technologies: virtualization, thin provisioning, deduplication, compression and automated tiering that will enable you to manage the influx of information and unlock new business opportunities."
In many IT departments, increased user demand has led to haphazard storage growth, resulting in sprawling, heterogeneous storage environments. These environments make it difficult to achieve optimal utilization and to provision storage capacity for new users and applications. Storage virtualization can put an end to these problems. It enables companies to logically aggregate disk storage so capacity can be efficiently allocated across applications and users.
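As a hedged illustration of that aggregation, the toy Python model below pools extents from several arrays and provisions volumes out of the shared pool; the array names, extent size and volume name are assumptions, not any product's actual layout.

```python
# Minimal model of capacity pooling under storage virtualization.
EXTENT_MB = 256  # illustrative extent size

pool = []  # list of (backing_array, extent_index) free extents across all arrays
for array, extents in {"array_a": 400, "array_b": 150, "array_c": 250}.items():
    pool.extend((array, i) for i in range(extents))

def provision(volume_name: str, size_mb: int) -> list:
    """Allocate a logical volume from the shared pool, wherever free extents live."""
    needed = -(-size_mb // EXTENT_MB)  # ceiling division
    if needed > len(pool):
        raise RuntimeError("pool exhausted")
    return [(volume_name, pool.pop()) for _ in range(needed)]

vdisk = provision("app01_vol", 10_000)  # spans whichever arrays have free capacity
```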
JeffHebert 060001UEQ2 Tags:  range tier performance storwize ibm mid storage enterprise 1,837 Views
"Since October 2010 IBM Corp. announced workload-optimized systems to help companies manage a range of more demanding workloads that are placing new stresses on already over-taxed data centers.
The offerings, which span IBM's systems portfolio, represent IBM's investment in systems integrated and optimized across chips, hardware and software, for a range of work at a time when companies face amounts of data and are under pressure to become more efficient in managing and drawing timely insights from the information.
The new systems include: A new offering for the zEnterprise BladeCenter Extension (zBX), IBM's systems design that allows workloads on mainframe servers and other select systems to share resources and be managed as a single, virtualized system; and key new Storage and System x products, which can bring new levels of efficiency to the data center."
IBM Selected to Manage First Phase of New York City's Data Center Consolidation and Modernization Program
JeffHebert 060001UEQ2 Tags:  capacity performance service storage cloud enterprise ibm 1,743 Views
Project to Streamline IT Infrastructure to Improve Service Delivery, Reduce Energy Consumption and Strengthen Security
NEW YORK, N.Y. - 31 Jan 2011: IBM (NYSE: IBM) today announced that it has been selected by the City of New York to build a more efficient, smarter technology platform for CITIServ, the City's IT infrastructure modernization program. The goal of the project is to streamline delivery of City services by consolidating and updating outdated and incompatible IT, thereby reducing energy consumption, strengthening security, and providing City workers with faster access to the latest technologies.
JeffHebert 060001UEQ2 Tags:  iaas storage compression range ibm enterprise scalable reliable available dedupe paas saas disk mid 1,659 Views
ProtecTIER deduplication offers 25-to-1 reduction and online backup
In June, IBM debuted ProtecTIER deduplication solutions for AIX and IBM i. ProtecTIER offers solutions to those who can’t complete backup operations in a given window, have difficulty protecting rapidly growing amounts of data or find their current backup infrastructure unreliable.
With data amounts growing, deduplication is becoming a vital part of data management, backup and recovery. “One of the reasons ProtecTIER is so crucial is because of the crazy growth the world is experiencing as it moves to an all-digital environment,” says Victor Nemechek, ProtecTIER deduplication offering manager at IBM. “Customers are finding their data often doubles or more every year and their current backup systems make it difficult to capture that data, protect it and restore it when they need to.”
For backups, many companies use tapes, which load data quickly but present retrieval problems. These challenges, along with reliability problems, sent customers to disk, where data was more accessible but also expensive. Companies used disk for small portions of their most critical data and kept their other data on tape. “Even with disk for critical data, backup is still an issue because you have a primary disk that you store your data on and you have to have that much disk to back up to, basically doubling your disk needs, and that can be very expensive,” Nemechek says.
“Deduplication can squeeze 25 terabytes of data down to only 1 terabyte of physical disk, so customers can have the speed and reliability of disk but without that one-to-one cost.” —Victor Nemechek, ProtecTIER deduplication offering manager, IBM
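In planning terms the quoted ratio works out as follows; the 25:1 figure is the one quoted above, while the backup sizes are invented for the example.

```python
def physical_disk_needed(logical_backup_tb: float, dedup_ratio: float = 25.0) -> float:
    """Physical disk required behind a deduplicating backup target."""
    return logical_backup_tb / dedup_ratio

# 25 TB of backup data lands on roughly 1 TB of physical disk at 25:1,
# instead of the one-to-one disk capacity described above.
print(physical_disk_needed(25.0))   # 1.0
print(physical_disk_needed(500.0))  # 20.0 -> a 500 TB backup estate needs ~20 TB
```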