JeffHebert 060001UEQ2 Tags:  ibm tier consolidate storage easy storwiz v7000 disk virtualize 13,382 Views
While the performance advantages of SSD storage are clear, the cost is often prohibitive. But what if you could target the data that really needs the performance edge at the SSD drives? You could balance the cost against IT performance gains that truly help your business perform. Read this brief from Mesabi Group to see how IBM Storwize® V7000 "users now have the tools – with the combination of Storage Tier Advisor and Easy Tier – to be able to plan for and use SSDs appropriately in their distinctive workload environments."
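The idea of targeting only the hottest data at SSDs can be sketched in a few lines. The sketch below is a generic heat-ranking heuristic, not IBM Easy Tier's actual algorithm; the extent names and I/O counts are made up for illustration.

```python
# Hypothetical sketch of automated tiering (not Easy Tier's real algorithm):
# rank extents by recent I/O activity and promote the hottest ones to the
# SSD tier until it is full.

def plan_tiering(extent_io_counts, ssd_capacity_extents):
    """Return (ssd_extents, hdd_extents) given {extent_id: io_count}."""
    ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
    ssd = set(ranked[:ssd_capacity_extents])
    hdd = set(ranked[ssd_capacity_extents:])
    return ssd, hdd

# Made-up per-extent I/O counts gathered over a monitoring interval.
heat = {"e1": 9000, "e2": 120, "e3": 8500, "e4": 40, "e5": 7000}
ssd, hdd = plan_tiering(heat, ssd_capacity_extents=2)
# The two hottest extents land on SSD; the rest stay on HDD.
```

The point of the sketch is that only a small SSD tier is needed when I/O heat is concentrated in a few extents.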
IBM® System Storage™ N series with Operations Manager software offers comprehensive monitoring and management for N series enterprise storage and content delivery environments. Operations Manager is designed to provide alerts, reports, and configuration tools from a central control point, helping you keep your storage and content delivery infrastructure in line with business requirements for high availability and low total cost of ownership.
We focus especially on Protection Manager, which is designed as an intuitive backup and replication management software for IBM System Storage N series unified storage disk-based data protection environments. The application is designed to support data protection and help increase productivity with automated setup and policy-based management.
This IBM Redbooks® publication demonstrates how Operations Manager manages IBM System Storage N series storage from a single view, remotely and from anywhere. Operations Manager can monitor and configure all distributed N series storage systems, N series gateways, and data management services to increase the availability and accessibility of their stored and cached data. Operations Manager can monitor the availability and capacity utilization of all its file systems regardless of where they are physically located. It can also analyze the performance utilization of its storage and content delivery network. It is available on Windows®, Linux®, and Solaris™.
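The kind of threshold-based capacity alerting that Operations Manager centralizes can be illustrated with a small sketch. The function name, thresholds, and file-system data below are hypothetical assumptions, not the product's API.

```python
# Illustrative sketch of threshold-based capacity alerting of the kind a
# central monitoring tool automates; names and numbers are hypothetical.

def capacity_alerts(filesystems, warn_pct=80, critical_pct=95):
    """Return (severity, name) pairs for file systems over utilization thresholds."""
    alerts = []
    for name, (used_gb, total_gb) in filesystems.items():
        pct = 100.0 * used_gb / total_gb
        if pct >= critical_pct:
            alerts.append(("CRITICAL", name))
        elif pct >= warn_pct:
            alerts.append(("WARNING", name))
    return alerts

# Made-up (used GB, total GB) figures for three volumes.
fs = {"/vol/home": (850, 1000), "/vol/db": (990, 1000), "/vol/scratch": (100, 1000)}
alerts = capacity_alerts(fs)
```

A central control point simply runs a check like this across every managed system and raises the resulting alerts in one place.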
“Procedures for replacing or adding nodes to an existing cluster”
Scope and Objectives
The scope of this document is twofold. The first section provides a procedure for replacing existing nodes in an SVC cluster non-disruptively. For example, the current cluster consists of two 2145-8F4 nodes, and the desire is to replace them with two 2145-CF8 nodes while maintaining the cluster size at two nodes. The second section provides a procedure for adding nodes to an existing cluster to support additional workload. For example, the current cluster consists of two 2145-8G4 nodes, and the desire is to grow it to a four-node cluster by adding two 2145-CF8 nodes.
The objective of this document is to provide greater detail on the steps required to perform the above procedures than is currently available in the SVC Software Installation and Configuration Guide, SC23-6628, located at www.ibm.com/storage/support/2145. In addition, it provides important information to help the person performing the procedures avoid problems while following the various steps.
Section 1: Procedure to replace existing SVC nodes non-disruptively
You can replace SAN Volume Controller 2145-8F2, 2145-8F4, 2145-8G4, and 2145-8A4 nodes with SAN Volume Controller 2145-CF8 nodes in an existing, active cluster without taking an outage on the SVC or on your host applications. In fact, you can use this procedure to replace any model of node with a different model, as long as the SVC software level supports that particular node model type. For example, you might want to replace a 2145-8F2 node in a test environment with a 2145-8G4 node that was previously in production and has just been replaced by a new 2145-CF8 node.
Note: If you are replacing existing 2145-4F2 nodes with new 2145-CF8 nodes, do not use this procedure; you must use the procedure written specifically for that upgrade, located at the following URL:
This procedure does not require changes to your SAN environment because the new node being installed uses the same worldwide node name (WWNN) as the node you are replacing. Because SVC uses the WWNN to generate the unique worldwide port names (WWPNs), no SAN zoning or disk controller LUN masking changes are required.
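To see why reusing the WWNN avoids rezoning, note that zones and LUN masking reference the ports' WWPNs, which the node derives from its WWNN. The derivation below is a made-up illustration of that dependency, not SVC's real WWPN scheme.

```python
# Why reusing the WWNN avoids SAN changes: zoning references WWPNs, and the
# node derives its WWPNs from its WWNN, so same WWNN -> same WWPNs.
# The derivation below is a hypothetical illustration, not SVC's scheme.

def derive_wwpns(wwnn, n_ports=4):
    """Hypothetical: generate per-port WWPNs by varying one byte of the WWNN."""
    base = int(wwnn, 16)
    return [format(base + (port << 40), "016x") for port in range(1, n_ports + 1)]

old_node_wwpns = derive_wwpns("5005076801000001")
new_node_wwpns = derive_wwpns("5005076801000001")  # new hardware, same WWNN
assert old_node_wwpns == new_node_wwpns  # zones still match -> no rezoning
```

Because the replacement node presents identical port names, every existing zone and LUN mask continues to apply unchanged.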
This paper was written and submitted by NetApp and is being republished with permission.
Flexible Choices to Optimize Performance
November 2008 | WP-7061-1008
Solid state drives (SSDs) based on flash memory are generating a lot of excitement. This enthusiasm is warranted because flash SSDs demonstrate latencies that are at least 10 times lower than the fastest hard disk drives (HDDs), often enabling response times more than 10X faster. For random read workloads, SSDs may deliver the I/O throughput of 30 or more HDDs while consuming significantly less power per disk. The performance of SSDs can reduce the number of fast-spinning hard disk drives you need in a storage system. Fewer disk drives translate into significant savings in power, cooling, and data center space. This performance benefit comes at a premium: flash SSDs are far more expensive per gigabyte of capacity than HDDs. Therefore, SSDs are best applied in situations that require the highest performance.
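The "30 or more HDDs" claim translates into simple sizing arithmetic. The per-device IOPS and wattage figures below are illustrative assumptions, not vendor benchmarks.

```python
# Back-of-the-envelope sizing behind the "one SSD ~ 30 HDDs" claim for random
# reads. Per-device figures are illustrative assumptions, not measurements.

HDD_IOPS, SSD_IOPS = 200, 6000    # assumed random-read IOPS per device
HDD_WATTS, SSD_WATTS = 12, 6      # assumed power draw per device

def drives_needed(target_iops, per_drive_iops):
    return -(-target_iops // per_drive_iops)   # ceiling division

target = 60000                           # required random-read IOPS
hdds = drives_needed(target, HDD_IOPS)   # 300 drives
ssds = drives_needed(target, SSD_IOPS)   # 10 drives
hdd_power = hdds * HDD_WATTS             # 3600 W
ssd_power = ssds * SSD_WATTS             # 60 W
```

Under these assumptions the SSD configuration needs a thirtieth of the spindles and a fraction of the power, which is the trade the white paper weighs against flash's higher cost per gigabyte.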
The underlying flash memory technology used by SSDs has many advantages, particularly in comparison to DRAM. In addition to storage persistence, these advantages include higher density, lower power consumption, and lower cost per gigabyte. Because of these unique characteristics, NetApp is focusing on the targeted use of flash memory in storage systems and within your storage infrastructure in ways that can deliver the most performance acceleration for the minimum investment.
We are implementing flash memory solutions using SSDs for persistent storage, and we will also use flash memory directly to create expanded read caching devices. Caching can deliver performance that is comparable to or better than SSDs. Because you can complement a large amount of hard disk capacity with a relatively modest amount of read cache, caching is more cost effective for typical enterprise applications. As a result, more people can benefit from the performance acceleration achievable with flash technology.
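The economics of a modest read cache rest on workload skew: if most reads touch a small hot set, a cache holding a few percent of the blocks absorbs most of the I/O. The simulation below assumes a 90/10-style access pattern and made-up sizes.

```python
# Sketch of why a small read cache is effective on skewed workloads.
# The workload shape and sizes are assumptions for illustration only.
import random

random.seed(1)
TOTAL_BLOCKS, CACHE_BLOCKS, READS = 10_000, 500, 50_000

# Assume the cache has already converged on the hot 5% of blocks.
cache = set(range(CACHE_BLOCKS))

hits = 0
for _ in range(READS):
    if random.random() < 0.9:                  # 90% of reads hit the hot set
        block = random.randrange(CACHE_BLOCKS)
    else:                                      # 10% are scattered everywhere
        block = random.randrange(TOTAL_BLOCKS)
    if block in cache:
        hits += 1

hit_rate = hits / READS   # a cache holding 5% of blocks serves ~90% of reads
```

This is the argument for pairing large HDD capacity with a relatively small flash cache: most reads are served at flash speed without paying flash prices for every gigabyte.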
You get even more flexibility and value from flash technology by combining it with the NetApp® unified storage architecture, which enables you to leverage your investment in flash memory to simultaneously accelerate multiple applications, whether they use SAN or NAS. Storage efficiency features such as deduplication for primary storage further increase your power, cooling, and space savings.
This white paper is an overview of NetApp’s plan to deliver SSDs (both native and virtualized arrays) plus flash-based read caching and of our ability to further leverage both of these technologies in caching architectures. Selection guidelines are provided to help you choose the right technology to reduce latency and increase your transaction rate while taking into consideration cost versus benefit.
Cloud security: the grand challenge
In addition to the usual challenges of developing secure IT systems, cloud computing presents an added level of risk because essential services are often outsourced to a third party. The externalized aspect of outsourcing makes it harder to maintain data integrity and privacy, support data and service availability, and demonstrate compliance.
In effect, cloud computing shifts much of the control over data and operations from the client organization to its cloud providers, much as organizations entrust part of their IT operations to outsourcing companies. Even basic tasks, such as applying patches and configuring firewalls, can become the responsibility of the cloud service provider, not the user. This means that clients must establish trust relationships with their providers and understand the risk in terms of how these providers implement, deploy, and manage security on their behalf. This "trust but verify" relationship between cloud service providers and consumers is critical because the cloud service consumer is still ultimately responsible for compliance and protection of its critical data, even if that workload has moved to the cloud. In fact, some organizations choose private or hybrid models over public clouds because of the risks associated with outsourcing services.
Other aspects of cloud computing also require a major reassessment of security and risk. Inside the cloud, it is difficult to physically locate where data is stored. Security processes that were once visible are now hidden behind layers of abstraction. This lack of visibility can create a number of security and compliance issues.
In addition, the massive sharing of infrastructure with cloud computing creates a significant difference between cloud security and security in more traditional IT environments. Users spanning different corporations and trust levels often interact with the same set of computing resources. At the same time, workload balancing, changing service level agreements, and other aspects of today's dynamic IT environments create even more opportunities for misconfiguration, data compromise, and malicious conduct.
Infrastructure sharing calls for a high degree of standardization and process automation, which can help improve security by reducing the risk of operator error and oversight. However, the risks inherent in a massively shared infrastructure mean that cloud computing models must still place a strong emphasis on isolation, identity, and compliance.
Cloud computing is available in several service models (and hybrids of these models), each presenting a different level of responsibility for security management. Figure 1 on page 3 depicts the different cloud computing models.
#1 The SiliconAngle / Wikibon Cube
You couldn't miss it. You walked onto the show floor and there it was, larger than life: the SiliconAngle / Wikibon Cube broadcasting live from VMworld 2011. Guests on the Cube included Tom Georgens (NTAP), Pat Gelsinger (EMC), David Scott (HP), and Rick Jackson (VMware), among many others. The Cube also hosted 12 Industry Spotlights; the most interesting had to do with Storage Optimization, especially for VMware.
Oh, the times they are a-changing. Now that HD TV can be delivered live over the internet, the Cube has broadcast from a number of industry shows and user conferences. The great part is that it is like watching a sporting event covered by ESPN, but for tech. The Cube brings all of the highlights of these events right to your computer screen. If you can't make an event, no problem: you can catch the most important messages from the Cube. The Cube is the new mechanism for delivering content to users in the way they want to receive it: TV. For more, check out www.siliconangle.tv
#2 Storage Optimization – Industry Spotlight
In the Storage Optimization industry spotlight, Dave Vellante and his co-host John Furrier spent the first 15 minutes teeing up the concept. They discussed storage optimization, where it has come from and where it is going, especially in VMware environments. We are hearing more and more about storage efficiency technologies. During the next 15 minutes, Dave and I discussed the five essential storage efficiency technologies, including:
We also discussed the fact that IBM Real-time Compression is not only the most efficient and effective compression technology in the industry; we also learned that IBM acquired not just a real-time "compression" technology but a platform that can do a number of things in real time. In fact, all five IBM storage efficiency technologies operate in real time, which is the most effective approach for customers.
We have been hearing a great deal about storage optimization in VMware environments because virtualizing servers, while successful for the server side of the house, didn't do everything it set out to do: it didn't fix the overall IT budget.
Virtualizing servers only pushed the financial problem to the storage side of the house. Users have told us that when they virtualize their servers, storage grows as much as 4x. By combining the right storage optimization technologies, users can get their budgets back under control and deliver on the promise that server virtualization set out to fulfill.
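Rough arithmetic shows how combined efficiency technologies could claw back the reported 4x growth. The deduplication and compression ratios below are illustrative assumptions, not measured IBM figures.

```python
# Rough arithmetic on reclaiming the 4x storage growth reported after server
# virtualization. The savings ratios are illustrative assumptions only.

baseline_tb = 100                    # storage before virtualization
virtualized_tb = baseline_tb * 4     # users report growth of as much as 4x

dedup_ratio = 0.5        # assume dedup halves near-identical VM images
compression_ratio = 0.5  # assume real-time compression halves the remainder

optimized_tb = virtualized_tb * dedup_ratio * compression_ratio
# 400 TB -> 100 TB: combined efficiency brings capacity back near baseline
```

Because the savings multiply, two technologies that each halve capacity are together enough to undo a 4x expansion under these assumed ratios.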
#3 More Free Time for “Real-life”
While on the Cube as a panelist with my good friend Marc Farley (HPsisyphus, formerly @3ParFarley), Dave asked us what was the most interesting thing we saw while walking the show floor. I didn't hesitate in my response; there were two things in my mind. First, it couldn't be more obvious how fast data is growing. Over 50% of the 19,000 people there had cameras, taking pictures and shooting video. That data is going to be stored somewhere. And they had those cameras for a reason: either we have more bloggers and tweeters than we know about, more marketing people are going to these events, or more people are using social media to inform and educate others. The way users want to receive data is always changing and evolving, and at VMworld 2011 we were delivering content in a number of ways, especially photos and video. All that data will end up in the "cloud" somewhere.
The second thing I noticed was the amount of free time VMware has given back to the IT user. On more than one occasion, I heard end users talking about family, vacations, and travel instead of the usual banter about how challenging their jobs are and the issues they have with their vendors, which is the normal thing I hear at these shows. This was not an anomaly. I am chalking it up to the fact that VMware makes people's lives easier.
#4 Proximal Data
These "most interesting things" are not in any particular order. I say this because I believe Proximal Data was THE most interesting thing I saw at the show. Proximal Data came out of "stealth" in early August. They didn't have a booth at VMworld, but they did have a "whisper suite." So, I have to confess: since I used to be an analyst, people sometimes ask me to take a look at their technology and their message to see whether it is in line with what is going on in the industry, so I got to hear the pitch.
Proximal Data's message is right on. It hits a very important and growing topic for VMware these days, the I/O bottleneck on virtual servers, and they solve this problem in a unique and intelligent way.
First, the problem. One of the issues facing VMware today is the number of virtual machines that one physical machine can host. The more VMs users can get on one system, the more efficient they can be. The problem is that systems today are running into I/O bottlenecks that limit the number of virtual machines one system can run.
One way to solve this problem is to add more memory to the host, but that can be very expensive. You can add more HBAs or NICs, but that can be expensive and difficult to manage. You can add more flash cache to your storage to relieve the I/O bottleneck, but doing that only solves half the problem; you still need to address the host side, again with memory or host adapters.
The solution: Proximal Data combines advanced I/O management software with PCI flash cards on the host, at a very reasonable price per host. The software combined with the card is 100% transparent to both the virtual servers and the storage, which to me is one of the most important features of the implementation. Transparency is the key to any new technology. IT has a ton of challenges and has done a great deal of work to get its environment to where it is today; implementing a technology that undoes all of that work is very painful. Remember, the hardest thing to change in IT is process, not technology, so it is important to preserve the process. That is what Proximal Data does. Proximal Data can increase the I/O capability of a VMware server with just a five-minute installation of the PCI card and its software. This technology can double or even triple the number of virtual machines on any physical server, and that is a tremendous ROI. A new win for efficiency.
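The transparency argument can be sketched as a host-side read cache that serves hits from flash, passes misses through to the array, and writes through so the array always holds correct data. This is a conceptual illustration, not Proximal Data's actual implementation.

```python
# Conceptual sketch of a transparent host-side read cache: reads are served
# from flash on a hit and passed through to the array on a miss; writes go
# straight through, so neither guest nor array sees any behavioral change.
# This is an illustration, not Proximal Data's implementation.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backend, capacity):
        self.backend = backend              # dict standing in for the array
        self.cache = OrderedDict()          # LRU cache standing in for flash
        self.capacity = capacity
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # refresh LRU position
            return self.cache[block]
        self.misses += 1
        data = self.backend[block]          # miss: go to the storage array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

    def write(self, block, data):
        self.backend[block] = data          # write-through: array stays correct
        self.cache.pop(block, None)         # invalidate to avoid stale reads

array = {b: f"data{b}" for b in range(10)}
rc = ReadCache(array, capacity=4)
for b in [1, 2, 1, 3, 1, 2]:
    rc.read(b)
# Half of these reads are absorbed by the cache instead of hitting the array.
```

Because the cache sits between unchanged reads and writes, it can be inserted or removed without touching guest configuration or array LUNs, which is the "preserve the process" point made above.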
A number of vendors are entering this market these days; however, Proximal does it transparently, with no agents, making it the most user-friendly implementation. While they won't have product until 2012, I am sure it will be very successful when it hits the market.
#5 Convergence to the Cloud
Are we seeing the coming of the "God Box"? A number of vendors are talking about, and investing in, public and private cloud. More systems are popping up that pack servers, networking, high availability, and storage into one floor tile. These systems are designed to integrate, scale, manage VMs simply, increase productivity, and ease the management of all possible application deployments in any business. These boxes also help you connect to the cloud to ease the cost burden. Is the pendulum swinging back to an "open systems" mainframe? Only time will tell.
One more for fun. The first meeting I had at VMworld was with a potential OEM prospect for the IBM Real-time Compression IP. I have always said that this technology could revolutionize the data storage business much like VxVM did for Veritas many years ago. Creating a standard way to do compression across a number of systems can help users with implementation and ease the storage cost burden. I hope this moves forward, and I hope more folks who want to OEM the technology step up.