JeffHebert 060001UEQ2 Tags:  storage backup compression replication marketing it technology 29 emc ibm august social justin.tv data ntap no deduplication comme business cloud diaster real-time 2011 virtualization media video on recovery with 2,070 Views
Originating Author: David Vellante
Co-author: David Floyer
There has been significant discussion in the industry about storage optimization and making better use of storage capacity. A number of storage vendors have successfully marketed data de-duplication for offline/backup applications, reducing the volume of backup data by a factor of 5-15:1, according to Wikibon user input.
Data de-duplication as applied to backup use cases differs from compression. Compression actually changes the data, applying algorithms to encode the same information in fewer bits. With de-duplication, the data itself is not changed; rather, copies 2 through N are deleted and replaced with pointers to a single 'master' instance of the data. Single-instancing can be thought of as synonymous with de-duplication.
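As a rough sketch (not any vendor's implementation), single-instancing can be modeled as keeping one master copy per unique block and turning copies 2-N into pointers:

```python
import hashlib

def deduplicate(blocks):
    """Single-instance store: keep one 'master' copy of each unique
    block; duplicate copies become pointers to the master instance."""
    store = {}      # digest -> master block
    pointers = []   # logical view: one pointer per original block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block   # first occurrence: keep the data
        pointers.append(digest)     # duplicates cost only a pointer
    return store, pointers

def read_back(store, pointers):
    """The data itself was never changed, so reads are exact."""
    return [store[d] for d in pointers]

blocks = [b"block-A", b"block-B", b"block-A", b"block-A"]
store, pointers = deduplicate(blocks)
```

Note that, unlike compression, no block's bytes were transformed; the savings come purely from eliminating redundant copies.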
Traditional data de-duplication technologies, however, are generally unsuitable for online or primary storage applications because the overhead of the de-duplication algorithms unacceptably elongates response times. As an example, popular data de-duplication solutions such as those from Data Domain, ProtecTier (Diligent/IBM), FalconStor, and EMC/Avamar are not used to reduce online storage capacity.
There are three primary approaches to optimizing online storage, reducing capacity requirements and improving overall storage efficiencies. Generally, Wikibon refers to these in the broad category of on-line or primary data compression, although the industry will often use terms like de-duplication (e.g. NetApp A-SIS) and single instancing. The three approaches are: array-based data reduction (e.g. NetApp A-SIS), host-managed offline data compression (e.g. Ocarina), and in-line data compression (e.g. IBM Real-time Compression).
Unlike some data reduction solutions for backup, these three approaches use lossless data compression algorithms, meaning the original bits can always be reconstructed exactly.
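The lossless property can be illustrated in a few lines, using Python's zlib as a stand-in for whatever codec a given product actually uses:

```python
import zlib

original = b"storage optimization " * 200
compressed = zlib.compress(original, level=9)

# Lossless means decompression reconstructs every bit exactly...
assert zlib.decompress(compressed) == original
# ...while the stored representation got smaller.
assert len(compressed) < len(original)
```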
Each of these approaches has certain benefits and drawbacks. The obvious benefit is reduced storage costs. However, each solution also places another technology layer in the network, increasing complexity and risk.
Array-based data reduction
Array-based data reduction technologies such as A-SIS operate in-line as data is written, reducing primary storage capacity. The de-duplication feature of WAFL (NetApp’s Write Anywhere File Layout) identifies duplicates of a 4K block at write time: a weak 32-bit digital signature of the 4K block is created and placed into a signature file in the metadata, and candidate matches are then compared bit-by-bit to ensure that there is no hash collision. The work of identifying duplicates is similar to NetApp's snapshot technology and is done in the background when controller resources are sufficient. The default schedule is once every 24 hours, and whenever the percentage of changed data reaches 20%.
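The write-time check described above can be sketched as follows; the CRC32 signature and in-memory index here are illustrative stand-ins for WAFL's actual signature file and metadata:

```python
import zlib

def dedupe_4k(blocks):
    """For each 4K block, compute a weak 32-bit signature (CRC32 here).
    Blocks whose signatures collide are compared byte-by-byte, so a
    hash collision can never merge two different blocks."""
    index = {}    # signature -> ids of stored blocks with that signature
    stored = []   # physical (unique) blocks
    refs = []     # logical block -> physical block id
    for block in blocks:
        sig = zlib.crc32(block)
        hit = None
        for pid in index.get(sig, []):
            if stored[pid] == block:   # bit-for-bit verification
                hit = pid
                break
        if hit is None:                # genuinely new data: store it
            hit = len(stored)
            stored.append(block)
            index.setdefault(sig, []).append(hit)
        refs.append(hit)
    return stored, refs

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
stored, refs = dedupe_4k(blocks)
```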
There are three main disadvantages of an A-SIS solution:
IT Managers should note that A-SIS is included as a no-charge standard offering within NetApp's Nearline component of ONTAP, the company's storage OS.
Host-managed offline data compression solutions
Ocarina is an example of a host-managed data reduction offering, or what it calls 'split-path.' It consists of an offline process that reads files through an appliance, compresses those files and writes them back to disk. When a file is requested, another appliance re-hydrates the data and delivers it to the application. The advantage of this approach is much higher levels of compression, because the process runs offline and can use more robust algorithms. A reasonable planning assumption is a reduction ratio of 3-6:1, and sometimes higher, for initial ingestion and read-only Web environments. However, because data must be re-hydrated when new data is written, classical production environments may see lower ratios.
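A minimal sketch of the post-process idea (not Ocarina's actual implementation): because the compression pass runs out of band, a slower but stronger codec is affordable. LZMA and the file layout below are assumptions for illustration only.

```python
import lzma
import os
import tempfile

def compress_offline(path):
    """Post-process pass: read the file, apply a heavy codec, write it
    back in compressed form, and remove the original."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".xz", "wb") as f:
        f.write(lzma.compress(data, preset=9))  # slow but strong
    os.remove(path)

def rehydrate(path):
    """On a read request, decompress ('re-hydrate') before handing
    the file back to the application."""
    with open(path + ".xz", "rb") as f:
        return lzma.decompress(f.read())

# demo against a temporary file
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "report.dat")
payload = b"quarterly numbers\n" * 1000
with open(path, "wb") as f:
    f.write(payload)
compress_offline(path)
```

The re-hydration step is exactly why write-heavy workloads fare worse: every rewrite forces a decompress/recompress cycle.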
In the case of Ocarina, the company has developed proprietary algorithms that can improve reduction ratios on many already-compressed file types (e.g. JPEG, PDF, MPEG), which is unique in the industry.
The main drawbacks of host-managed data reduction solutions are:
On balance, solutions such as Ocarina are highly suitable and cost-effective for infrequently accessed data and read-intensive applications. High update environments should be avoided.
In-line data compression
IBM Real-time Compression offers in-line data compression whereby a device sits between servers and the storage network (see Shopzilla's architecture). Wikibon members indicate a compression ratio of 1.5-2:1 is a reasonable rule-of-thumb.
The main advantage of the IBM Real-time Compression approach is very low latency (i.e. microseconds) and improved performance. Storage performance improves because compression occurs before data hits the storage network. As a result, all data in the storage network is compressed, meaning less data is sent through the SAN, cache, internal array, and disk devices, reducing resource requirements and shrinking backup windows by 40% or more, according to Wikibon estimates.
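Conceptually, an in-line compressor transforms the write stream before it reaches the SAN. The sketch below uses zlib's streaming interface with a fast compression level, since latency is the constraint in-line; this is an illustration of the technique, not IBM's implementation.

```python
import zlib

class InlineCompressor:
    """Compress data in-line, before it reaches the storage network,
    so the SAN, cache, and disks all carry fewer bytes."""
    def __init__(self):
        # level=1: a fast codec, because every write waits on this path
        self._c = zlib.compressobj(1)

    def write(self, chunk):
        return self._c.compress(chunk)   # emit compressed bytes as we go

    def flush(self):
        return self._c.flush()

comp = InlineCompressor()
payload = b"log line 42\n" * 512
on_the_wire = comp.write(payload) + comp.flush()
```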
There are two main drawbacks of the IBM Real-time Compression approach:
On balance, the advantage of an Ocarina or IBM Real-time Compression approach is that it can be applied to any file-based storage (i.e. heterogeneous devices). NetApp and other array-based solutions lock customers into a particular storage vendor but have certain advantages as well. For example, they are simpler to implement because they are already integrated.
An Ocarina approach is best applied in read-intensive environments, where it will achieve better reduction ratios due to its post-process/batch ingestion methodology. IBM Real-time Compression will achieve the highest levels of compression and ROI in general-purpose enterprise data centers of 30TB or greater.
Action Item: On-line data reduction is rapidly coming to mainstream storage devices in your neighborhood. Storage executives should familiarize themselves with the various technologies in this space and demand that storage vendors apply capacity optimization techniques to control storage costs.
#1 The SiliconAngle / Wikibon Cube
You couldn’t miss it. You walked onto the show floor and there it was, larger than life: the SiliconAngle / Wikibon Cube broadcasting live from VMworld 2011. Guests on the Cube included Tom Georgens (NTAP), Pat Gelsinger (EMC), David Scott (HP), and Rick Jackson (VMware), as well as many more. The Cube also hosted 12 Industry Spotlights. The most interesting spotlight dealt with storage optimization, especially for VMware.
Oh, the times they are a-changing. Now that HD TV can be delivered live over the Internet, the Cube has broadcast from a number of industry shows and user conferences. The great part is that it's like watching a sporting event covered by ESPN, but for tech. The Cube brings all of the highlights of these events right to your computer screen. If you can't make an event, no problem: you can catch all the most important messages from the Cube. The Cube is the new mechanism for delivering content to users the way they want to receive it, as TV. For more, check out www.siliconangle.tv
#2 Storage Optimization – Industry Spotlight
In the Storage Optimization industry spotlight, Dave Vellante and his co-host John Furrier spent the first 15 minutes teeing up the concept. They discussed storage optimization, where it has come from and where it is going, especially in VMware environments. We are hearing more and more about storage efficiency technologies. During the next 15 minutes Dave and I discussed the five essential storage efficiency technologies, including:
We also discussed the fact that IBM Real-time Compression is not only the most efficient and effective compression technology in the industry; we also learned that IBM acquired not just a real-time "compression" technology but a platform that can do a number of things in real time. In fact, all five IBM storage efficiency technologies operate in real time, which is the most effective approach for customers.
We have been hearing a great deal about storage optimization in VMware environments because server virtualization, while successful for the server side of the house, didn't do all it set out to do: it didn't fix the overall IT budget.
Virtualizing servers only pushed the financial problem to the storage side of the house. Users have told us that when they virtualize their servers, storage grows as much as 4x. By combining the right storage optimization technologies, users can get their budgets back under control and deliver on the promise that server virtualization set out to fulfill.
#3 More Free Time for “Real-life”
While on the Cube as a panelist with my good friend Marc Farley (HPsisyphus, formerly @3ParFarley), Dave asked us what was the most interesting thing we saw while walking the show floor. I didn't hesitate in my response; there were two in my mind. First, it couldn't be more obvious how fast data is growing. Over 50% of the 19,000 people there had cameras and were taking pictures and video. That data is going to be stored somewhere. And they had those cameras for a reason: either we have more bloggers and tweeters than we know about, more marketing people are going to these events, or more people are using social media to inform and educate others. The way users want to receive data is always changing and evolving, and at VMworld 2011 we were delivering content in a number of ways, especially photos and video. All that data will end up in the "cloud" somewhere.
The second thing I noticed was the amount of free time VMware has given back to the IT user. On more than one occasion I heard end users talking about family, vacations, and travel instead of the usual banter about how challenging their jobs are and the issues they have with their vendors, which is the normal thing I hear at these shows. This was not an anomaly. I am chalking it up to the fact that VMware makes people's lives easier.
#4 Proximal Data
These “most interesting things” are not in any particular order. I say this because I believe Proximal Data was THE most interesting thing I saw at the show. Proximal Data came out of “stealth” in early August. They didn’t have a booth at VMworld, but they did have a “whisper suite.” I have to confess: since I used to be an analyst, people sometimes ask me to take a look at their technology and their message to see if it is in line with what is going on in the industry, so I got to hear the pitch.
Proximal Data’s message is right on. It hits a very important and growing topic in VMware environments these days, the I/O bottleneck on virtual servers, and they solve the problem in a unique and intelligent way.
First, the problem. One of the issues facing VMware today is the number of virtual machines that can be hosted on one physical machine. The more VMs users can pack onto one system, the more efficient they can be. The problem is that systems today run into I/O bottlenecks that limit the number of virtual machines one system can run.
One way to solve this problem is to add more memory to the host, but that can be very expensive. You can add more HBAs or NICs, but that can be expensive and difficult to manage. You can add more flash cache to your storage to relieve the I/O bottleneck, but that only solves half the problem; you still need to address the host side, again with memory or host adapters.
The solution: Proximal Data combines advanced I/O management software with PCI flash cards on the host, at a very reasonable price per host. The software combined with the card is 100% transparent to both the virtual servers and the storage, which to me is one of the most important features of the implementation. Transparency is the key to any new technology. IT has a ton of challenges and has done a great deal of work to get its environment to where it is today; implementing a technology that undoes all of that work is very painful. Remember, the hardest thing to change in IT is process, not technology, so it's important to preserve the process. That is what Proximal Data does. Proximal Data can increase the I/O capability of a VMware server with just a five-minute installation of the PCI card and its software. The technology can double or even triple the number of virtual machines on a physical server, and that is a tremendous ROI. A new win for efficiency.
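The transparency argument can be made concrete with a toy host-side read cache. This is my own sketch of the general technique, not Proximal Data's product: hot blocks are served from local flash, misses fall through to the array, and neither the VM nor the storage needs to know the cache exists.

```python
from collections import OrderedDict

class HostReadCache:
    """Transparent host-side read cache: serve hot blocks from local
    flash (modeled as a dict), fall through to backend storage on a
    miss, evict least-recently-used blocks when full."""
    def __init__(self, backend_read, capacity=1024):
        self.backend_read = backend_read
        self.capacity = capacity
        self.cache = OrderedDict()   # LBA -> block, in LRU order

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)      # hit: no backend I/O at all
            return self.cache[lba]
        block = self.backend_read(lba)       # miss: one trip to the array
        self.cache[lba] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the coldest block
        return block

backend_reads = []
def backend_read(lba):
    backend_reads.append(lba)                # count array-bound I/Os
    return b"data@%d" % lba

cache = HostReadCache(backend_read, capacity=2)
for _ in range(3):
    cache.read(7)   # first call misses; the next two are served locally
```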
A number of vendors are entering this market these days; however, Proximal does it transparently and with no agents, making it the most user-friendly implementation. While they won't have product until 2012, I am sure it will be very successful when it hits the market.
#5 Convergence to the Cloud
Are we seeing the coming of the “God Box”? A number of vendors are talking about, and investing in, public/private cloud. More systems are popping up that put servers, networking, high availability, and storage all in one floor tile. These systems are designed to integrate, scale, manage VMs simply, increase productivity, and ease the management of nearly any application deployment in a business. These boxes also help you connect to the cloud to ease the cost burden. Is the pendulum swinging back to an “open systems” mainframe? Only time will tell.
One more for fun. The first meeting I had at VMworld was with a potential OEM prospect for the IBM Real-time Compression IP. I have always said that this technology could revolutionize the data storage business much like VxVM did for Veritas many years ago. Creating a standard way to do compression across a number of systems can help users with implementation as well as ease the storage cost burden. I hope this moves forward, and I hope more folks who want to OEM the technology step up.
More Storage Goodies
A quick summary of the latest announcements, by Nick Harris. In the cover story this month, Lee Cleveland, Distinguished Engineer, Power Systems direct-attach storage, and Andy Walls, Distinguished Engineer and chief hardware architect for DS8000 and solid-state drives (SSDs), sat down to talk about all of the new storage technologies IBM has been releasing lately. What I didn’t have room for in the article was a nice summary of the technologies that can help you improve access, manage growth, protect data, reduce costs, or reduce complexity. Whatever your goals, IBM has an integrated storage option for every organization.
Here are the quick highlights of the latest storage announcements:
IBM Storwize V7000
New advanced software functions
Proven IBM software functionalities
IBM Storwize Rapid Application Storage Solution
Runs on: AIX 7.1-5.3, IBM i 7.1-6.1 (with VIOS), Red Hat and SUSE Linux, z/VSE, Microsoft Windows, Mac OS X
Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top-executive exodus it has been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay off 550 people, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's moving to a "streamlined operating model" focused on five areas, not the literally 30 different directions it's been going in, although, come to think of it, it did say something about "greater focus," so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services; collaboration; data center virtualization and cloud; video; and architectures for business transformation."
Nobody seems to know what that last one is; the Wall Street Journal criticized Cisco for not being able to explain in plain English what it's doing, and Barron's complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway, Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days, by July 31, when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on making a series of changes throughout the next quarter and as we enter the new fiscal year that will make it easier to work for and with Cisco, as we focus our portfolio, simplify operations and manage expenses. Our five company priorities are for a reason - they are the five drivers of the future of the network, and they define what our customers know Cisco is uniquely able to provide for their business success. The new operating model will enable Cisco to execute on the significant market opportunities of the network and empower our sales, service and engineering organizations."
"As the world becomes more interconnected, instrumented and intelligent, more and more information is created. This influx of information creates both challenges and opportunities. Companies must build smarter information infrastructures that can handle all of this information and manage it intelligently. IBM has invested billions of dollars developing smart storage solutions that embody a set of essential technologies: virtualization, thin provisioning, deduplication, compression and automated tiering that will enable you to manage the influx of information and unlock new business opportunities."
In many IT departments, increased user demand has led to haphazard storage growth, resulting in sprawling, heterogeneous storage environments. These environments make it difficult to achieve optimal utilization and to provision storage capacity for new users and applications. Storage virtualization can put an end to these problems. It enables companies to logically aggregate disk storage so capacity can be efficiently allocated across applications and users.
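The aggregation idea can be sketched in a few lines. This is a conceptual model, not any vendor's implementation, and the array names are made up: a virtualization layer pools free capacity across heterogeneous arrays and provisions volumes from wherever space exists.

```python
class VirtualPool:
    """Logically aggregate disk arrays into one pool so capacity can
    be allocated efficiently across applications and users."""
    def __init__(self, free_gb_by_array):
        self.free = dict(free_gb_by_array)   # array name -> free GB
        self.volumes = {}                    # volume -> (array, GB)

    def provision(self, name, gb):
        # First-fit placement: the host sees one pool and never needs
        # to know which physical array actually backs the volume.
        for array, avail in self.free.items():
            if avail >= gb:
                self.free[array] -= gb
                self.volumes[name] = (array, gb)
                return array
        raise RuntimeError("no array has %d GB free" % gb)

# hypothetical arrays with stranded free space
pool = VirtualPool({"array_a": 100, "array_b": 500})
pool.provision("vm_datastore", 300)   # array_a is too small; lands on array_b
```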
"Since October 2010, IBM Corp. has announced workload-optimized systems to help companies manage a range of more demanding workloads that are placing new stresses on already over-taxed data centers.
The offerings, which span IBM's systems portfolio, represent IBM's investment in systems integrated and optimized across chips, hardware and software, for a range of work at a time when companies face growing amounts of data and are under pressure to become more efficient in managing and drawing timely insights from the information.
The new systems include: A new offering for the zEnterprise BladeCenter Extension (zBX), IBM's systems design that allows workloads on mainframe servers and other select systems to share resources and be managed as a single, virtualized system; and key new Storage and System x products, which can bring new levels of efficiency to the data center."
by Steve Kenniston. After landing in Warsaw, I got into a car with the local sales leader for Poland and we drove to the event location, a two-hour drive. The roads and the land in Poland reminded me very much of my hometown in Maine: rural and scenic, but beautiful and peaceful. We talked storage for two hours, and I am always fascinated by the thirst for knowledge I find when I travel. It was a great ride, followed by a customer reception and some local Polish brew.
Thursday I spent the day in Sterdyn, Poland, for IBM Storage University. There were 30 customers at the event, and it went very well. The event was held at Palac Ossolinski, used today as an event center but with a very rich history; at one point it served as a medical facility in WWII. The photo is of the building where we held the event. The topics we covered were:
The customers were very interactive and provided a lot of insight into their environments. Interestingly enough, I learned during our customer reception that IBM storage is #1 in Poland, with HP second and EMC third. This is a true testament to the IBM sellers and to the customers who use IBM products every day to drive their businesses. I also learned that the data breakdown in Poland is 90% block, 10% file, which I found interesting; I would like to check back 12 months from now to see how it has changed.
I did learn something very interesting in Poland. The question was asked, “Why XIV? What is so special about XIV?” The answer was awesome. It started with two questions:
1) How old is RAID?
2) How old is your iPhone?
The reality is that data growth is outpacing what traditional RAID can handle, and data profiles are changing as well. Combined, these trends have driven new technologies like Cleversafe, cloud computing, Hadoop, and XIV. Just as the iPhone is a new approach to the smart phone, based on new knowledge about how smart phones are used, we now know more about how data and storage are being used. New ways to deliver capacity and performance are needed to keep up with the changing times. I thought it was a very good answer, framed in terms that make customers think.
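The "how old is RAID?" jab can be made concrete with a sketch of classic XOR parity. Rebuilding one lost block requires reading every surviving block plus parity, which is why rebuild times balloon as drives grow; this is a textbook illustration, not any product's code.

```python
def xor_blocks(blocks):
    """Byte-wise XOR across equal-sized blocks; RAID-5 style parity
    is exactly this operation over the data blocks of a stripe."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"\x01\x02\x03", b"\x0f\x00\xf0", b"\xaa\x55\x00"]
parity = xor_blocks(data)

# Pretend data[1] is on a failed drive: rebuilding it means reading
# *every* survivor plus parity. On multi-terabyte drives that full
# read is what makes traditional RAID rebuilds painfully slow.
rebuilt = xor_blocks([data[0], data[2], parity])
```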
Thursday evening I traveled back to Warsaw, where I got in a bit late and just went to a local pub, Sketch. I grabbed a small bite and some local mead, then headed back to the hotel. I did get to see the Palace of Culture and Science in the middle of Warsaw; very impressive, built as a gift from Russia to Poland.
I have an early flight to Prague. I am very excited about this part of the journey as I have always wanted to travel to Prague. Press meeting right when I land. Stay tuned.
Storage Efficiency through Real-time Data Compression for the Entire Data Lifecycle
Agnostic to Applications and Storage
IBM Real-time Compression appliances reduce storage capacity utilization by up to 80% without performance degradation. They increase the capacity of existing storage infrastructure, helping organizations meet the demands of rapid data growth while also enhancing storage performance and utilization. The result is unprecedented cost savings, ROI, and operational and environmental efficiency.
The IBM Real-time Compression appliances address data optimization on primary storage, so capacity is optimized across all tiers of storage. The IBM Real-time Compression Appliance STN6500 and STN6800 align with your existing storage networking configuration for easy installation. The appliances install transparently in front of your existing NAS storage and, through patented real-time compression, reduce the size of every file created.
See how Language Weaver has utilized IBM Real-time Compression, achieving 3:1 compression with a solution that was totally transparent to its infrastructure.
by Steve Kenniston. The first city on my Eastern European trip was Moscow. I think the traffic there is worse than the 101 in Silicon Valley during the dot-com era. That said, it was a great visit. I spoke at the Information Infrastructure Conference at the Swissotel convention center in Moscow; it was the first time I spoke to a group through an interpreter. It was like being at the UN. The two main topics were storage efficiency and Real-time Compression.
I spoke with a few customers and the press about dealing with data growth challenges. They wanted to know, “When it comes to big data, what is next? Is it ‘huge data’?” Data growth is clearly a concern. Interestingly enough, though, most of the questions centered on my title of “Evangelist.” One reporter told me, “If an Evangelist is ‘preaching the word of storage,’ then why not just call yourself an Apostle?” How would that look on an IBM business card: Global Storage Efficiency Apostle?
The next day I did a day of sales enablement in the Moscow office. We discussed mostly how to sell and position Real-time Compression and what is next for the technology. I was very impressed with the team: they were very technical, knew quite a bit about Real-time Compression, and really wanted to know in more detail how the technology was invented. That means they are really talking about the technology, and customers are drilling down into the next level of detail. There are a lot of good opportunities for the technology in Moscow, and I look forward to hearing more about the success of Real-time Compression there.
I didn’t have a lot of time to sightsee, but I did make it to Red Square. You can actually buy a beer and walk around outside in Red Square, so I did. I took a few photos, and then, as the US workday was getting going, I had some work calls to attend to. That evening I had dinner on the 34th floor of my hotel, with a great view of Moscow. I hope to come back.
Next stop: Warsaw, Poland. Stay tuned.
FleetCor Selects Brocade to Provide Cloud-Optimized Network Services for 500,000 Commercial Accounts
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 -- Brocade (NASDAQ: BRCD) today announced that FleetCor, a leading independent global provider of specialized payment products and services to businesses, commercial fleets, major oil companies, petroleum marketers and government fleets, has selected Brocade as the vendor to build its cloud-optimized network. This new network enhances FleetCor's ability to securely process millions of transactions monthly and ultimately better serve its commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor cardholders worldwide, and they are used to purchase billions of gallons of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help evolve its data center and IT operations into a more agile private cloud infrastructure. Brocade® cloud-optimized networks are designed to reduce network complexity while increasing performance and reliability. Brocade solutions for private cloud networking are purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we looked at market leadership and non-stop access to critical data," said Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade cloud-optimized networking solutions are perfect for our data centers because they allow us to optimize applications faster, virtually eliminate downtime and help us meet service level agreements for our customers. Moving to a cloud-based model also provides us the flexibility to make adjustments on the fly and access secure information virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router in each of its three data centers, citing scalability as a major driver for the purchase. This approach enables FleetCor to virtualize its geographically distributed data centers and leverage the equipment it already has, at the highest level, to achieve maximum return on investment. The Brocade MLXe provides additional benefits for FleetCor by using less power and having a smaller footprint than competitive routers, which is critical in power- and space-constrained locations that need room to grow. The Brocade MLXe also enables continuous business operation for FleetCor through Multi-Chassis Trunking, massive scalability (supporting the industry's highest 100 GbE density with no performance degradation for advanced features like IPv6), and flexible chassis options to meet network and business requirements.
The Brocade ServerIron ADX Series of high-performance application delivery switches provides FleetCor with a broad range of application optimization functions to help ensure the reliable delivery of critical applications. Purpose-built for large-scale, low-latency environments, these switches accelerate application performance, load-balance high volumes of data, and improve application availability while making the most efficient use of the company's existing infrastructure. The series also delivers dynamic application provisioning and de-provisioning for FleetCor's highly virtualized data center, and enables seamless migration and translation to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers FleetCor has eliminated thousands of costly networking cables, saving it hundreds of thousands of dollars and allowing the company to segment, streamline and secure its network. FleetCor has also been able to easily integrate Brocade network technology with third-party offerings already installed in the network, for complete investment protection. FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in each of our data centers to help us leverage the benefits of cloud computing and the Brocade MLXe delivered on all fronts," said Keirbeck. "By virtualizing our data center, Brocade allows for non-stop access to the mission-critical data that FleetCor and its customers rely on every day. We chose the Brocade MLXe because of the tremendous results we already saw from our existing Brocade solutions and the exceptional support and service."
According to a report from analyst firm Gartner, "Although 'economic affordability' is an immediate, attractive benefit, the biggest advantages (of cloud services) result from characteristics such as built-in elasticity and scalability, reduced barriers to entry, flexibility in service provisioning and agility in contracting."(1)