Cloud security: the grand challenge
In addition to the usual challenges of developing secure IT systems, cloud computing
presents an added level of risk because essential services are often outsourced to a third
party. The externalized aspect of outsourcing makes it harder to maintain data integrity and
privacy, support data and service availability, and demonstrate compliance.
In effect, cloud computing shifts much of the control over data and operations from the client
organization to their cloud providers, much in the same way organizations entrust part of their
IT operations to outsourcing companies. Even basic tasks, such as applying patches and
configuring firewalls, can become the responsibility of the cloud service provider, not the user.
This means that clients must establish trust relationships with their providers and understand
the risk in terms of how these providers implement, deploy, and manage security on their
behalf. This "trust but verify" relationship between cloud service providers and consumers is
critical because the cloud service consumer remains ultimately responsible for compliance and
for the protection of its critical data, even after that workload has moved to the cloud. In fact, some
organizations choose private or hybrid models over public clouds because of the risks
associated with outsourcing services.
Other aspects about cloud computing also require a major reassessment of security and risk.
Inside the cloud, it is difficult to physically locate where data is stored. Security processes that
were once visible are now hidden behind layers of abstraction. This lack of visibility can create
a number of security and compliance issues.
In addition, the massive sharing of infrastructure with cloud computing creates a significant
difference between cloud security and security in more traditional IT environments. Users
spanning different corporations and trust levels often interact with the same set of computing
resources. At the same time, workload balancing, changing service level agreements, and
other aspects of today's dynamic IT environments create even more opportunities for
misconfiguration, data compromise, and malicious conduct.
Infrastructure sharing calls for a high degree of standardization and process automation, which
can help improve security by reducing the risk of operator error and oversight. However, the
risks inherent with a massively shared infrastructure mean that cloud computing models must
still place a strong emphasis on isolation, identity, and compliance.
Cloud computing is available in several service models (and hybrids of these models). Each
presents different levels of responsibility for security management. Figure 1 on page 3 depicts
the different cloud computing models.
Originating Author: Wikibon Daemon
This paper was written and submitted by NetApp and is being republished with permission.
Flexible Choices to Optimize Performance
November 2008 | WP-7061-1008
Solid state drives (SSDs) based on flash memory are generating a lot of excitement. This enthusiasm is warranted: flash SSDs demonstrate latencies at least 10 times lower than those of the fastest hard disk drives (HDDs), often enabling response times more than 10X faster. For random read workloads, SSDs may deliver the I/O throughput of 30 or more HDDs while consuming significantly less power per device. The performance of SSDs can reduce the number of fast-spinning hard disk drives you need in a storage system. Fewer disk drives translate into significant savings in power, cooling, and data center space. This performance benefit comes at a premium; flash SSDs are far more expensive per gigabyte of capacity than HDDs. Therefore, SSDs are best applied in situations that require the highest performance.
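The spindle-count arithmetic above can be sketched in a few lines. The IOPS and wattage figures below are illustrative assumptions, not measurements from any particular drive.

```python
# Back-of-envelope sizing: how many fast-spinning HDDs can one flash
# SSD replace for a random-read workload? All figures are illustrative
# assumptions, not vendor specifications.
SSD_IOPS, HDD_IOPS = 10_000, 300   # random-read IOPS per device
SSD_WATTS, HDD_WATTS = 6, 15       # active power per device (W)

def hdds_replaced(ssd_iops=SSD_IOPS, hdd_iops=HDD_IOPS):
    """HDD spindles needed to match one SSD on random reads."""
    return -(-ssd_iops // hdd_iops)  # ceiling division

spindles = hdds_replaced()
power_saved = spindles * HDD_WATTS - SSD_WATTS
print(spindles, power_saved)       # 34 spindles, 504 W saved
```

Even with these rough numbers, the power, cooling, and floor-space argument for targeted SSD deployment is easy to see.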
The underlying flash memory technology used by SSDs has many advantages, particularly in comparison to DRAM. In addition to storage persistence, these advantages include higher density, lower power consumption, and lower cost per gigabyte. Because of these unique characteristics, NetApp is focusing on the targeted use of flash memory in storage systems and within your storage infrastructure in ways that can deliver the most performance acceleration for the minimum investment.
We are implementing flash memory solutions using SSDs for persistent storage, and we will also use flash memory directly to create expanded read caching devices. Caching can deliver performance that is comparable to or better than SSDs. Because you can complement a large amount of hard disk capacity with a relatively modest amount of read cache, caching is more cost effective for typical enterprise applications. As a result, more people can benefit from the performance acceleration achievable with flash technology.
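The economics of read caching come down to a weighted-average latency. A minimal sketch, with purely assumed latency figures:

```python
# Average read latency with a flash read cache in front of HDDs.
# The latency figures (microseconds) are assumptions for illustration.
FLASH_US, HDD_US = 100, 5000

def effective_latency_us(hit_ratio):
    """Weighted-average read latency for a given cache hit ratio."""
    return hit_ratio * FLASH_US + (1 - hit_ratio) * HDD_US

# Even a modest 75% hit ratio cuts average latency almost 4x, which is
# why a small cache can front a large pool of HDD capacity.
print(effective_latency_us(0.75))   # 1325.0
```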
You get even more flexibility and value from flash technology by combining it with the NetApp® unified storage architecture, which enables you to leverage your investment in flash memory to simultaneously accelerate multiple applications, whether they use SAN or NAS. Storage efficiency features such as deduplication for primary storage further increase your power, cooling, and space savings.
This white paper is an overview of NetApp’s plan to deliver SSDs (both native and virtualized arrays) plus flash-based read caching and of our ability to further leverage both of these technologies in caching architectures. Selection guidelines are provided to help you choose the right technology to reduce latency and increase your transaction rate while taking into consideration cost versus benefit.
Originating Author: David Vellante
Co-author: David Floyer
There has been significant discussion in the industry about storage optimization and making better use of storage capacity. A number of storage vendors have successfully marketed data de-duplication for offline/backup applications, reducing the volume of backup data by a factor of 5-15:1, according to Wikibon user input.
Data de-duplication as applied to backup use cases is different from compression in that compression actually transforms the data, using algorithms to encode the same information in fewer bits. With de-duplication, the data is not changed; rather, copies 2-N are deleted and pointers are inserted to a 'master' instance of the data. Single-instancing can be thought of as synonymous with de-duplication.
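The pointer mechanics described above can be illustrated with a toy single-instance store (the names and structure here are invented for illustration, not any vendor's implementation):

```python
import hashlib

# Toy single-instance store: copies 2-N of a block are replaced by
# pointers to the 'master' instance; the stored bytes are unchanged.
store = {}      # fingerprint -> master copy of the block
pointers = []   # one pointer per logical block written

def write(block: bytes):
    key = hashlib.sha256(block).hexdigest()
    store.setdefault(key, block)   # keep only the first (master) copy
    pointers.append(key)

for b in (b"alpha", b"beta", b"alpha", b"alpha"):
    write(b)

print(len(pointers), len(store))        # 4 logical blocks, 2 stored
assert store[pointers[2]] == b"alpha"   # reads just follow the pointer
```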
Traditional data de-duplication technologies, however, are generally unsuitable for online or primary storage applications because the overhead of the algorithms required to de-duplicate data unacceptably elongates response times. As an example, popular data de-duplication solutions such as those from Data Domain, ProtecTier (Diligent/IBM), Falconstor, and EMC/Avamar are not used to reduce the capacity of online storage.
There are three primary approaches to optimizing online storage, reducing capacity requirements, and improving overall storage efficiency. Wikibon generally refers to these under the broad category of online or primary data compression, although the industry often uses terms like de-duplication (e.g., NetApp A-SIS) and single-instancing. These data reduction technologies are illustrated by the following types of solutions:
Unlike some data reduction solutions for backup, these three approaches use lossless data compression algorithms, meaning the original bits can always be reconstructed exactly.
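A quick way to see the lossless property is a compress/decompress round trip with a stock algorithm such as zlib's DEFLATE:

```python
import zlib

# Lossless round trip: compression changes the representation, but
# decompression reconstructs every bit of the original exactly.
data = b"storage optimization " * 100
packed = zlib.compress(data)

assert zlib.decompress(packed) == data   # bit-for-bit identical
print(len(data), len(packed))            # 2100 bytes in, far fewer out
```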
Each of these approaches has certain benefits and drawbacks. The obvious benefit is reduced storage cost. However, each solution places another technology layer in the network and increases complexity and risk.
Array-based data reduction
Array-based data reduction technologies such as A-SIS operate as data is being written to reduce primary storage capacity. The de-duplication feature of WAFL (NetApp's Write Anywhere File Layout) identifies duplicate 4K blocks by computing a weak 32-bit digital signature for each block at write time and placing it into a signature file in the metadata; blocks whose signatures match are then compared bit-by-bit to ensure that there is no hash collision. The work of identifying duplicates is similar to the snapshot technology and is done in the background when controller resources are sufficient. By default it runs once every 24 hours and whenever the percentage of changed data reaches 20%.
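The signature-then-verify scheme described above can be sketched as follows; CRC32 stands in for the (unspecified) weak 32-bit signature, so this is an illustration of the idea, not NetApp's actual code:

```python
import zlib

# Fingerprint-then-verify dedup: a cheap 32-bit signature (CRC32 here,
# purely as a stand-in) nominates candidate duplicate 4K blocks, and a
# full byte-for-byte comparison rules out hash collisions before a
# block is actually deduplicated.
signatures = {}   # 32-bit signature -> blocks stored under it

def dedup_write(block: bytes) -> bool:
    """Return True if the block is a duplicate of one already stored."""
    sig = zlib.crc32(block)
    for candidate in signatures.get(sig, []):
        if candidate == block:   # byte-for-byte verification
            return True          # true duplicate: keep a pointer only
    signatures.setdefault(sig, []).append(block)
    return False

assert dedup_write(bytes(4096)) is False   # first copy is stored
assert dedup_write(bytes(4096)) is True    # second copy is deduplicated
```

The byte-for-byte check is what makes a weak 32-bit signature safe to use: the signature only narrows the search, it never decides on its own.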
In addition, there are three main disadvantages of an A-SIS solution, including:
IT Managers should note that A-SIS is included as a no-charge standard offering within NetApp's Nearline component of ONTAP, the company's storage OS.
Host-managed offline data compression solutions
Ocarina is an example of a host-managed data reduction offering, or what it calls 'split-path.' It consists of an offline process that reads files through an appliance, compresses them, and writes them back to disk. When a file is requested, another appliance re-hydrates the data and delivers it to the application. The advantage of this approach is much higher levels of compression, because the process runs offline and can use more robust algorithms. A reasonable planning assumption is reduction ratios in the range of 3-6:1, and sometimes higher for initial ingestion and read-only Web environments. However, because data must be re-hydrated when new data is written, classical production environments may see lower ratios.
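The split-path flow, reduced to its essentials, looks like the in-memory sketch below. Function names are invented, and zlib stands in for Ocarina's proprietary algorithms:

```python
import zlib

# Split-path sketch: an offline pass compresses files at rest, and a
# read-side step re-hydrates them on demand.
at_rest = {}

def offline_ingest(name: str, contents: bytes):
    # heavier settings are affordable because this runs out-of-band
    at_rest[name] = zlib.compress(contents, level=9)

def rehydrate(name: str) -> bytes:
    return zlib.decompress(at_rest[name])

original = b"quarterly numbers " * 200
offline_ingest("report.txt", original)
assert rehydrate("report.txt") == original
assert len(at_rest["report.txt"]) < len(original)   # capacity saved
```

The re-hydration step on every read and re-ingest on every write is exactly why this approach favors read-intensive, infrequently updated data.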
In the case of Ocarina, the company has developed proprietary algorithms that can improve reduction ratios on many existing file types (e.g. jpeg, pdf, mpeg, etc), which is unique in the industry.
The main drawbacks of host-managed data reduction solutions are:
On balance, solutions such as Ocarina are highly suitable and cost-effective for infrequently accessed data and read-intensive applications. High update environments should be avoided.
In-line data compression
IBM Real-time Compression offers in-line data compression whereby a device sits between servers and the storage network (see Shopzilla's architecture). Wikibon members indicate a compression ratio of 1.5-2:1 is a reasonable rule-of-thumb.
The main advantages of the IBM Real-time Compression approach are very low latency (i.e., microseconds) and improved performance. Storage performance improves because compression occurs before data hits the storage network. As a result, all data in the storage network is compressed, meaning less data is sent through the SAN, cache, internal array, and disk devices, reducing resource requirements and shrinking backup windows by 40% or more, according to Wikibon estimates.
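Applying the rule-of-thumb ratios quoted above to an assumed 10 TB working set shows how much less data crosses the SAN:

```python
# Less data crosses the SAN when compression happens before the
# network. Ratios are the 1.5-2:1 rule of thumb from Wikibon members,
# applied to an assumed 10 TB working set.
def reduced_tb(raw_tb, ratio):
    """Capacity that actually traverses the SAN after compression."""
    return raw_tb / ratio

for ratio in (1.5, 2.0):
    print(ratio, reduced_tb(10, ratio))   # ~6.7 TB and 5.0 TB on the wire
```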
There are two main drawbacks of the IBM Real-time Compression approach, including:
On balance, the advantage of an Ocarina or IBM Real-time Compression approach is that it can be applied to any file-based storage (i.e., heterogeneous devices). NetApp and other array-based solutions lock customers into a particular storage vendor but have certain advantages as well. For example, they are simpler to implement because they are already integrated.
An Ocarina approach is best applied in read-intensive environments, where it will achieve better reduction ratios due to its post-process/batch ingestion methodology. IBM Real-time Compression will achieve the highest levels of compression and ROI in general-purpose enterprise data centers of 30 TB or greater.
Action Item: On-line data reduction is rapidly coming to mainstream storage devices in your neighborhood. Storage executives should familiarize themselves with the various technologies in this space and demand that storage vendors apply capacity optimization techniques to control storage costs.
#1 The SiliconAngle / Wikibon Cube
You couldn’t miss it. You walked onto the show floor, and there it was, larger than life: the SiliconAngle / Wikibon Cube, broadcasting live from VMworld 2011. Guests on the Cube included Tom Georgens (NTAP), Pat Gelsinger (EMC), David Scott (HP), and Rick Jackson (VMware), as well as many more. The Cube also hosted 12 Industry Spotlights. The most interesting spotlight had to do with Storage Optimization, especially for VMware.
Oh, the times they are a-changing. Now that you can deliver HD TV live over the internet, the Cube has broadcast from a number of industry shows and user conferences. The great part is that it is like watching a sporting event covered by ESPN, but for tech. The Cube brings all of the highlights of these events right to your computer screen. If you can’t make an event, no problem: you can catch all of the most important messages from the Cube. The Cube is the new mechanism for delivering content to users in the way they want to receive it: TV. For more, check out www.siliconangle.tv
#2 Storage Optimization – Industry Spotlight
In the Storage Optimization industry spotlight, Dave Vellante and his co-host John Furrier spent the first 15 minutes teeing up the concept. They discussed storage optimization, where it has come from and where it is going, especially in VMware environments. We are hearing more and more about storage efficiency technologies. During the next 15 minutes, Dave and I discussed the 5 essential storage efficiency technologies, including:
We also discussed the fact that IBM Real-time Compression is the most efficient and effective compression technology in the industry; we also learned that IBM really acquired not just a real-time “compression” technology but a platform that can do a number of things in real time. In fact, all 5 of the IBM storage efficiency technologies operate in real time, which is the most effective approach for customers.
We have been hearing a great deal about storage optimization in VMware environments because server virtualization, successful as it was for the server side of the house, didn’t do everything it set out to do: it didn’t fix the overall IT budget.
Virtualizing servers only pushed the financial problem to the storage side of the house. Users have told us that when they virtualize their servers, storage grows as much as 4x. By combining the right storage optimization technologies, users can get their budgets back under control and deliver on the promise that server virtualization set out to fulfill.
#3 More Free Time for “Real-life”
While I was on the Cube as a panelist with my good friend Marc Farley (HPsisyphus, formerly @3ParFarley), Dave asked us what was the most interesting thing we had seen while walking around the show floor. I didn’t hesitate in my response; there were two things in my mind. First, it couldn’t be more obvious how fast data is growing. Over 50% of the 19,000 people there had cameras and were taking pictures and video. That data is going to be stored somewhere. And they had those cameras for a reason: either we have more bloggers and tweeters than we know about, more marketing people are going to these events, or more people are using social media to inform and educate others. The way users want to receive data is always changing and evolving, and at VMworld 2011 we were delivering content in a number of ways, especially photos and video. All that data will end up in the “cloud” somewhere.
The second thing I noticed was the amount of free time VMware has given back to the IT user. On more than one occasion I heard end users talking about family, vacations, and travel instead of the usual banter about how challenging their jobs are and the issues they have with their vendors, which is the normal thing I hear at these shows. This was not an anomaly. I am chalking it up to the fact that VMware makes people’s lives easier.
#4 Proximal Data
These “most interesting things” are not in any particular order. I say this because I believe Proximal Data is THE most interesting thing I saw at the show. Proximal Data just came out of “stealth” in early August. They didn’t have a booth at VMworld, but they did have a “whisper suite”. I have to confess: since I used to be an analyst, people sometimes ask me to take a look at their technology and their message to see if it is in line with what is going on in the industry, so I got to hear the pitch.
Proximal Data’s message is right on. It hits a very important and growing topic in VMware environments these days, the I/O bottleneck on virtual servers, and they solve this problem in a unique and intelligent way.
First, the problem. One of the issues facing VMware today is the number of virtual machines that can be hosted on one physical machine. The more VMs users can run on one system, the more efficient they can be. The problem is that today’s systems are running into I/O workload bottlenecks that limit the number of virtual machines one system can run.
One way to solve this problem is to add more memory to the host, but that can be very expensive. You can add more HBAs or NICs, but that too is expensive and difficult to manage. You can add more flash cache to your storage to relieve the I/O bottleneck, but that only solves half the problem; you still need to address the host side, again with memory or host adapters.
The solution: Proximal Data combines advanced I/O management software with PCI flash cards on the host, at a very reasonable price per host. The software combined with the card is 100% transparent to both the virtual servers and the storage, which to me is one of the most important features of the implementation. Transparency is the key to any new technology. IT has a ton of challenges and has done a great deal of work to get its environment to where it is today; implementing a technology that undoes all of that work is very painful. Remember, the hardest thing to change in IT is process, not technology, so it is important to preserve the process. That is what Proximal Data does. Proximal Data can increase the I/O capability of a VMware server with just a 5-minute installation of the PCI card and its software. This technology can double or even triple the number of virtual machines on any physical server, and that is a tremendous ROI. A new win for efficiency.
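The host-side caching idea can be sketched as a generic LRU read cache sitting in front of the array; this is an illustration of the concept, not Proximal Data's actual design:

```python
from collections import OrderedDict

# Generic LRU read cache standing in for host-side flash: hot blocks
# are served locally instead of generating array I/O.
class HostReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # lba -> cached data

    def read(self, lba, backend_read):
        if lba in self.blocks:                # hit: served from flash
            self.blocks.move_to_end(lba)
            return self.blocks[lba]
        data = backend_read(lba)              # miss: one trip to the array
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict least-recently-used
        return data

array_reads = []
def backend(lba):
    array_reads.append(lba)                   # count real array I/Os
    return b"block-%d" % lba

cache = HostReadCache(capacity_blocks=8)
cache.read(7, backend)
cache.read(7, backend)
print(len(array_reads))   # 1: the repeat read never touched the array
```

Every hit absorbed on the host is I/O the storage network never sees, which is how such a cache raises the VM-per-server ceiling.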
There are a number of folks entering this market these days; however, Proximal does it transparently, with no agents, making it the most user-friendly implementation. While they won’t have product until 2012, I am sure it will be very successful when it hits the market.
#5 Convergence to the Cloud
Are we seeing the coming of the “God box”? A number of vendors are talking about, and investing in, public and private cloud. More and more systems are popping up that put servers, networking, high availability, and storage all in one floor tile. These systems are designed to integrate, scale, manage VMs simply, increase productivity, and ease the management of all possible application deployments in any business. Additionally, these boxes help you connect to the cloud to ease the cost burden. Is the pendulum swinging back to the “open systems” mainframe? Only time will tell.
One more for fun. The first meeting I had at VMworld was with a potential OEM prospect for the IBM Real-time Compression IP. I have always said that this technology could revolutionize the data storage business much like VxVM did for Veritas many years ago. Creating a standard way to do compression across a number of systems can help users with implementation as well as ease the storage cost burden. I hope this moves forward, and I hope more folks who want to OEM the technology step up.
by Steve Kenniston
The first city on my Eastern European trip was Moscow. I think the traffic here is worse than the 101 in Silicon Valley during the dot-com era. That said, it was a great visit. I spoke at the Information Infrastructure Conference at the Swissotel convention center in Moscow. It was the first time I had spoken to a group through an interpreter; it was like being at the UN. The two main topics were Storage Efficiency and Real-time Compression.
I spoke with a few customers and the press, and in dealing with their data growth challenges they wanted to know, “When it comes to big data, what is next? Is it ‘huge data’?” Data growth is clearly a concern. Interestingly enough, though, most of the questions came around my title of “Evangelist”. One reporter told me, “If an Evangelist is ‘preaching the word of storage’, then why not just call yourself an Apostle?” How do you think that would look on an IBM business card: Global Storage Efficiency Apostle?
The next day I did a day of “sales enablement” in the Moscow office. We discussed mostly how to sell and position Real-time Compression and what is next for the technology. I was very impressed with the team. They were very technical and knew quite a bit about Real-time Compression and really wanted to know in more detail how the technology was invented. This means they are really talking about the technology and the customers are drilling down into the next level of detail. There are a lot of good opportunities for the technology in Moscow and I look forward to hearing more about the success of Real-time Compression there.
I didn’t have a lot of time to sightsee, but I did make it to Red Square. You can actually buy a beer outside in Red Square and walk around, so I did. I took a few photos, and then, as the US workday was getting going, I had some work calls to attend to. That evening I had dinner on the 34th floor of my hotel; it was a great view of Moscow. I hope to come back.
Next stop: Warsaw, Poland. Stay tuned.
by Steve Kenniston
After landing in Warsaw, I got into a car with the local sales leader for Poland, and we drove two hours to the event location. The roads and the land in Poland reminded me very much of my home in Maine: scenic and rural, but beautiful and peaceful. We talked storage for the whole drive, and I am always fascinated by the thirst for knowledge I find when I travel. It was a great ride, followed by a customer reception and some local Polish brew.
Thursday I spent the day in Sterdyn, Poland, for IBM Storage University. There were 30 customers at the event, and it went very well. The event was held at Palac Ossolinski, used today as an event center but with a very rich history; at one point during WWII it served as a medical facility. The photo is of the building where we held the event. The topics we covered were:
The customers were very interactive and provided a lot of insight into their environments. Interestingly enough, I learned during our customer reception that IBM storage is #1 in Poland, with HP second and EMC third. This is a true testament to the IBM sellers and to the customers who use IBM products every day to drive their business. I also learned that the data breakdown in Poland is 90% block, 10% file, and I would be interested to check back 12 months from now to see how it has changed.
I did learn something very interesting in Poland. The question was asked, “Why XIV? What is so special about XIV?” The answer was awesome, and it started with 2 questions:
1) How old is RAID?
2) How old is your iPhone?
The reality is that data growth is outpacing what traditional RAID can handle, and data profiles are changing as well. Combined, these trends have driven new technologies like Cleversafe, cloud computing, Hadoop, and XIV. Just as the iPhone is a new approach to the smartphone based on what we have learned about how smartphones are actually used, we now know more about how data and storage are used. New ways to deliver capacity and performance are needed to keep up with the changing times. I thought it was a very good answer, put in terms that make customers think.
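The contrast with fixed RAID groups can be sketched as pseudo-random wide striping, XIV-style: every partition of every volume maps to some disk across the whole system, so load and rebuild work are shared by all spindles. The placement function below is invented for illustration and is not IBM's algorithm.

```python
import hashlib

# Pseudo-random wide striping: each partition of a volume hashes to a
# disk across the entire system, so no single RAID group becomes the
# hot spot. (Illustrative placement function, not IBM's algorithm.)
DISKS = 12

def disk_for(volume_id: int, partition: int) -> int:
    digest = hashlib.sha256(f"{volume_id}:{partition}".encode()).digest()
    return digest[0] % DISKS

placement = [disk_for(1, p) for p in range(10_000)]
print(len(set(placement)))   # 12: every disk holds a share of the volume
```

Because placement is deterministic, any controller can compute where a partition lives without a central lookup, and losing one disk spreads the rebuild read load over all the others.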
Thursday evening I traveled back to Warsaw, where I got in a bit late and just went to a local pub, Sketch. I grabbed a small bite and some local mead, then headed back to the hotel. I did get to see the Palace of Culture and Science in the middle of Warsaw; it is very impressive, built as a gift from Russia to Poland.
I have an early flight to Prague. I am very excited about this part of the journey as I have always wanted to travel to Prague. Press meeting right when I land. Stay tuned.
by Steve Kenniston
Alright, I landed safely in Prague, was picked up by one of my colleagues, and was whisked away to the IBM office. There we did an interview with Czech writer Martin Noska of Computerworld, published by IDG in the Czech Republic. The first thing Noska told me was that IBM is number one in storage sales in the Czech Republic (just like Poland!). He also had some very good questions, opening with, “What are IBM’s biggest challenges in the storage business?” I thought about this for a while, and I would have to say it is really about marketing our storage “solutions” to the customer base. IBM’s scale is a double-edged sword: IBM is so big and has so many products that it becomes difficult to market or message all of them without inundating and confusing our customers. If you think about it, IBM has hundreds of thousands of customers and business partners, if not more. This is one of our strengths. When customers have needs or requirements, we have very good input into our product portfolio, perhaps the best in the business. Combine this with the fact that IBM has not only storage solutions but technology across the entire stack, from servers to networking. So when it comes to developing the right technology, one that solves real customer problems, I would argue that IBM’s portfolio is the best in the business. IBM takes extreme care when developing a solution to ensure that it matches customer requirements based on the changing needs of IT. Having an integrated portfolio that works well with our ISV partners, VMware for example, allows us to help customers speed their time to ROI and to be very competitive in the marketplace. The challenge is: how do we properly message our new solutions to our customers, in a timely manner, so that they are well aware of new products without giving them so much information that it just becomes noise? It is difficult, to say the least.
The interview went very well. There were questions about tape, where we discussed the advantages of IBM’s LTFS technology for more advanced tape usage, and we discussed the direction data deduplication will take. Noska’s view was that there hadn’t been any advancement in data deduplication in the last 5 years. I told him that for secondary storage (backup) he was right, but that the real advancement in deduplication will come when it is ready for primary storage. Today deduplication isn’t ready for primary, but it will be soon.
On Monday the 13th we traveled to visit Avnet. They are a great IBM partner. Like most partners, they have a very large SMB install base, and consistent with a lot of the SMB feedback I have been getting, they are looking for a building-block solution with all of the software features implemented as part of the stack. SMB and enterprise customers alike are starting to realize that the value in any array lies in the software stack that makes the hardware efficient, optimized, flexible, and dynamic. IT’s job keeps getting more challenging as it develops strategic initiatives to make the business more competitive, and it is the vendor’s job to make sure these solutions are as optimized and cost-effective as possible.
We also visited DHL. They have one of the greatest datacenters I have ever visited. They are very advanced and push a lot of data, handling some very strategic logistics for a number of companies in Europe and Asia. Like many others, they have a number of challenges. Echoing my blog post about “The 5 Most Interesting Things at VMworld” (#4), I heard something very interesting today. I asked, “What is your most challenging storage issue?” My host told me that storage itself was not his most difficult challenge. Storage efficiency was important to him in order to keep driving down costs as his organization delivers a service to the different groups that make up DHL, but his most difficult challenge was server I/O in his VMware environment. If you read #4 in my post, regarding Proximal Data, this is exactly the issue they address. As VM instances grow on physical servers, I/O becomes the big problem. DHL runs over 4,000 instances of VMware, and as the business demands more applications and application resources, they are bound by the I/O of the server, which also forces them to WAY over-provision their storage for performance reasons. This is time-consuming, management-intensive, and expensive. The combination of a solution like Proximal Data and compression can help them optimize their infrastructure to save money and deliver better, more cost-effective services to their lines of business.
On the lighter side, I spent the weekend in Prague. What an amazing city. The weather was fantastic, and I was able to take a lot of great photos. I walked around Prague Castle, ate some authentic Czech food, visited the memorial for the Czech hockey players who died in the Russian plane crash, and met some pretty interesting people. You can check out some of my photos of Prague at www.facebook.com/skenniston. Coincidentally, the photo above shows the “Golden Lane”, where the alchemists worked to turn anything they could find into gold in the city of Prague.