While the performance advantages of SSD storage are clear, the cost is often prohibitive. But what if you could target the data that really needs the performance edge at the SSD drives? You could balance the cost against IT performance gains that truly help your business perform. Read this brief from Mesabi Group to see how IBM Storwize® V7000 "users now have the tools, with the combination of Storage Tier Advisor and Easy Tier, to be able to plan for and use SSDs appropriately in their distinctive workload environments." Full Article at BNET
“Procedures for replacing or adding nodes to an existing cluster”
Scope and Objectives
The scope of this document is twofold. The first section provides a procedure for replacing existing nodes in an SVC cluster non-disruptively. For example, the current cluster consists of two 2145-8F4 nodes and the desire is to replace them with two 2145-CF8 nodes, maintaining the cluster size at two nodes. The second section provides a procedure to add nodes to an existing cluster to expand the cluster to support additional workload. For example, the current cluster consists of two 2145-8G4 nodes and the desire is to grow it to a four-node cluster by adding two 2145-CF8 nodes.
The objective of this document is to provide greater detail on the steps required to perform the above procedures than is currently available in the SVC Software Installation and Configuration Guide, SC23-6628, located at www.ibm.com/storage/support/2145. In addition, it provides important information to assist the person performing the procedures to avoid problems while following the various steps.
Section 1: Procedure to replace existing SVC nodes non-disruptively
You can replace SAN Volume Controller 2145-8F2, 2145-8F4, 2145-8G4, and 2145-8A4 nodes with SAN Volume Controller 2145-CF8 nodes in an existing, active cluster without taking an outage on the SVC or on your host applications. In fact, you can use this procedure to replace any model node with a different model node as long as the SVC software level supports that particular node model type. For example, you might want to replace a 2145-8F2 node in a test environment with a 2145-8G4 node previously in production that was just replaced by a new 2145-CF8 node.
Note: If you are attempting to replace existing 2145-4F2 nodes with new 2145-CF8 nodes, do not use this procedure; you must use the procedure specifically for this sort of upgrade, located at the following URL: ftp://ftp.software.ibm.com/storage/san/sanvc/V5.1.0/pubs/multi/4F2MigrationVer1.pdf
This procedure does not require changes to your SAN environment because the new node being installed uses the same worldwide node name (WWNN) as the node you are replacing. Since SVC uses the WWNN to generate the unique worldwide port names (WWPNs), no SAN zoning or disk controller LUN masking changes are required. READ MORE>
The IBM XIV® Storage System demonstrates how storage can simplify management and provisioning, yielding optimization benefits, especially for virtualized server environments. This means that growth in data does not mean growth in complexity. XIV has a virtualized, grid-based architecture that enables self-tuning and self-healing, as well as simple management and low total costs.
IBM® System Storage™ N series with Operations Manager software offers
comprehensive monitoring and management for N series enterprise storage
and content delivery environments. Operations Manager is designed to
provide alerts, reports, and configuration tools from a central control
point, helping you keep your storage and content delivery infrastructure
in line with business requirements for high availability and low total
cost of ownership.
We focus especially on Protection Manager, which is designed as an
intuitive backup and replication management software for IBM System
Storage N series unified storage disk-based data protection
environments. The application is designed to support data protection and help increase productivity with automated setup and policy-based protection management.
This IBM Redbooks® publication demonstrates how Operations Manager
manages IBM System Storage N series storage from a single view and
remotely from anywhere. Operations Manager can monitor and configure all
distributed N series storage systems, N series gateways, and data
management services to increase the availability and accessibility of
their stored and cached data. Operations Manager can monitor the
availability and capacity utilization of all its file systems regardless
of where they are physically located. It can also analyze the
performance utilization of its storage and content delivery network. It
is available on Windows®, Linux®, and Solaris™. Read More>
Solid state drives (SSDs) based on flash memory are generating a lot of excitement. This enthusiasm is warranted because flash SSDs demonstrate latencies at least 10 times lower than the fastest hard disk drives (HDDs), often enabling response times more than 10x faster. For random read workloads, SSDs may deliver the I/O throughput of 30 or more HDDs while consuming significantly less power per disk. The performance of SSDs can reduce the number of fast-spinning hard disk drives you need in a storage system. Fewer disk drives translate into significant savings of power, cooling, and data center space. This performance benefit comes at a premium; flash SSDs are far more expensive per gigabyte of capacity than HDDs. Therefore SSDs are best applied in situations that require the highest performance.
The underlying flash memory technology used by SSDs has many advantages, particularly in comparison to DRAM. In addition to storage persistence, these advantages include higher density, lower power consumption, and lower cost per gigabyte. Because of these unique characteristics, NetApp is focusing on the targeted use of flash memory in storage systems and within your storage infrastructure in ways that can deliver the most performance acceleration for the minimum investment.
We are implementing flash memory solutions using SSDs for persistent storage, and we will also use flash memory directly to create expanded read caching devices. Caching can deliver performance that is comparable to or better than SSDs. Because you can complement a large amount of hard disk capacity with a relatively modest amount of read cache, caching is more cost effective for typical enterprise applications. As a result, more people can benefit from the performance acceleration achievable with flash technology.
You get even more flexibility and value from flash technology by combining it with the NetApp® unified storage architecture, which enables you to leverage your investment in flash memory to simultaneously accelerate multiple applications, whether they use SAN or NAS. Storage efficiency features such as deduplication for primary storage further increase your power, cooling, and space savings.
This white paper is an overview of NetApp’s plan to deliver SSDs (both native and virtualized arrays) plus flash-based read caching and of our ability to further leverage both of these technologies in caching architectures. Selection guidelines are provided to help you choose the right technology to reduce latency and increase your transaction rate while taking into consideration cost versus benefit.
Cloud security: the grand challenge
In addition to the usual challenges of developing secure IT systems, cloud computing presents an added level of risk because essential services are often outsourced to a third party. The externalized aspect of outsourcing makes it harder to maintain data integrity and privacy, support data and service availability, and demonstrate compliance. In effect, cloud computing shifts much of the control over data and operations from the client organization to its cloud providers, much in the same way organizations entrust part of their IT operations to outsourcing companies. Even basic tasks, such as applying patches and configuring firewalls, can become the responsibility of the cloud service provider, not the user. This means that clients must establish trust relationships with their providers and understand the risk in terms of how these providers implement, deploy, and manage security on their behalf. This "trust but verify" relationship between cloud service providers and consumers is critical because the cloud service consumer is still ultimately responsible for compliance and protection of its critical data, even if that workload has moved to the cloud. In fact, some organizations choose private or hybrid models over public clouds because of the risks associated with outsourcing services.
Other aspects of cloud computing also require a major reassessment of security and risk. Inside the cloud, it is difficult to physically locate where data is stored. Security processes that were once visible are now hidden behind layers of abstraction. This lack of visibility can create a number of security and compliance issues.
In addition, the massive sharing of infrastructure with cloud computing creates a significant difference between cloud security and security in more traditional IT environments. Users spanning different corporations and trust levels often interact with the same set of computing resources. At the same time, workload balancing, changing service level agreements, and other aspects of today's dynamic IT environments create even more opportunities for misconfiguration, data compromise, and malicious conduct. Infrastructure sharing calls for a high degree of standardization and process automation, which can help improve security by eliminating the risk of operator error and oversight. However, the risks inherent in a massively shared infrastructure mean that cloud computing models must still place a strong emphasis on isolation, identity, and compliance.
Cloud computing is available in several service models (and hybrids of these models). Each presents different levels of responsibility for security management. Figure 1 on page 3 depicts the different cloud computing models. READ MORE>
Last week I was briefing Dan Kusnetzky, storage analyst from the Kusnetzky Group, on the value proposition of Real-time Compression for all downstream processes, especially backup. Specifically, I told him that there is NO technology available today that can have even 50% of the effect on the existing backup process that Real-time Compression can have, without changing any architecture in the backup process.
Dan agreed; in fact, he told me that the Real-time Compression technology meets the "Golden Rules of IT". I asked Dan, "What are the Golden Rules of IT?" and he enlightened me. I didn't make these up, so I can't take credit, but I thought they were definitely worth sharing, and a good rule of thumb to follow for IT. Here they are:
If it's not broke, don't fix it.
Don't touch it, you'll break it.
If you touched it, you broke it.
Good enough, is good enough.
Accept your "jerkdom" (Everybody is a Monday morning quarterback)
I have to agree, these are good rules to follow and a great complement to the Real-time Compression technology. The fact that this technology fits into any storage environment transparently and can optimize storage up to 5x without any performance impact makes it very simple and one of the only ways to have a significant, compounding budgetary effect for very little dough.
WASHINGTON - 01 Mar 2011: IBM (NYSE:IBM) today announced a major expansion of its Institute for Electronic Government (IEG) in Washington, D.C., adding cloud computing and analytics capabilities for public sector organizations around the world.
IBM has moved and expanded the facility in order to meet the growing demand from Government, Health Care and Education leaders who recognize the potential of cloud computing environments and business analytics technologies to improve efficiencies, reduce costs and tackle energy and budget challenges.
According to recent IBM surveys of technology leaders globally, 83 percent of respondents identified business analytics -- the ability to see patterns in vast amounts of data and extract actionable insights -- as a top priority and a way in which they plan to enhance their competitiveness. In addition, an overwhelming majority of respondents -- 91 percent -- expect cloud computing to overtake on-premise computing as the primary IT delivery model by 2015.
Technology giant IBM on Tuesday said it has emerged as the top player in the Indian external disk storage systems market for the year 2010. According to IT research firm IDC, IBM India has maintained its 2010 leadership with a 26.2 per cent market share (in revenue terms) and an over four percentage point lead over its nearest competitor.
“While the overall external disk storage market in India declined 1.5 per cent in calendar year 2010, according to IDC, IBM has been able to grow its hold in the country given the constant innovation and focus on bringing in storage efficiency,” Sandeep Dutta, Storage, Systems and Technology Group, IBM India/South Asia told PTI.
Also, in Q4 2010, IBM maintained leadership with a 29 per cent market share and a seven percentage point lead over its nearest competitor in revenue terms.
During the year 2010, IBM launched products like IBM Storwize V7000 and IBM System Storage DS8000, which helped it strengthen its leadership position in the market.
During the year, IBM bagged orders from Kotak, Suzlon, Oswal mills, CEAT, L&T (ECC division), Indian Farmer and Fertilizer Cooperative Ltd, Solar Semiconductors and Ratnamani Metals. Read More>
That includes all the same features like replication, thin provisioning, self-optimized flash tier and Cloud Agile, which is the ability to take advantage of cloud storage technology for replication and recovery of data, all in an array that lists starting at under $11,000, Walsh said. Read More>
There has been significant discussion in the industry about
storage optimization and making better use of storage capacity. A number
of storage vendors have successfully marketed data de-duplication for offline/backup applications, reducing the volume of backup data by a factor of 5-15:1, according to Wikibon user input.
Data de-duplication as applied to backup use cases is different
from compression in that compression actually changes the data using
algorithms to create a computational byproduct and write fewer bits.
With de-duplication, data is not changed; rather, copies 2-N are deleted and pointers are inserted to a 'master' instance of the data.
Single-instancing can be thought of as synonymous with de-duplication.
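A minimal single-instancing sketch in Python may make the distinction concrete (an illustration only; the 4K block size, SHA-256 hashing, and in-memory layout are assumptions of this sketch, not any vendor's implementation). Duplicate blocks are detected by content hash, and copies 2-N become pointers to the stored 'master' block:

```python
import hashlib

class DedupStore:
    """Toy single-instance store: one copy per unique block, pointers elsewhere."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}    # content hash -> the stored 'master' block
        self.pointers = []  # logical volume: hashes referencing master blocks

    def write(self, data: bytes):
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # copies 2-N are never stored
            self.pointers.append(digest)           # only a pointer is added

    def read(self) -> bytes:
        return b"".join(self.blocks[d] for d in self.pointers)

store = DedupStore()
store.write(b"A" * 8192 + b"B" * 4096 + b"A" * 4096)  # four logical 4K blocks
print(len(store.pointers), "logical blocks,", len(store.blocks), "stored")  # 4, 2
```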
Traditional data de-duplication technologies, however, are
generally unsuitable for online or primary storage applications because
the overheads associated with the algorithms required to de-duplicate
data will unacceptably elongate response times. As an example, popular
data de-duplication solutions such as those from Data Domain, ProtecTier
(Diligent/IBM), Falconstor and EMC/Avamar are not used for reducing
capacities of online storage.
There are three primary approaches to optimizing online storage,
reducing capacity requirements and improving overall storage
efficiencies. Generally, Wikibon refers to these in the broad category
of on-line or primary data compression, although the industry will often
use terms like de-duplication (e.g. NetApp A-SIS) and single
instancing. These data reduction technologies are illustrated by the
following types of solutions:
NetApp A-SIS and EMC Celerra which employ either “data de-duplication light” or single-instance technology embedded into the storage array;
Each of these approaches has certain benefits and drawbacks. The obvious benefit is reduced storage costs. However, each solution places another technology layer in the network and increases complexity.
Array-based data reduction
Array-based data reduction technologies such as A-SIS operate in-line as data is being written to reduce primary storage capacity. The de-duplication feature of WAFL (NetApp’s Write Anywhere File Layout) identifies duplicates of a 4K block at write time by creating a weak 32-bit digital signature of the 4K block, which is placed into a signature file in the metadata; candidate matches are then compared byte-by-byte to ensure that there is no hash collision. The work of identifying the duplicates is similar to the snap technology and is done in the background if controller resources are sufficient. The default is once every 24 hours and every time the percentage of changes reaches 20%.
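A rough sketch of that verification step follows (assumptions: CRC32 stands in for the undisclosed 32-bit signature, and blocks are compared in memory; this is not NetApp's code). The weak signature only nominates candidates; the full byte-by-byte comparison rules out hash collisions before two blocks are merged:

```python
import zlib

signature_index = {}  # 32-bit fingerprint -> list of stored candidate blocks

def dedup_candidate(block: bytes):
    """Return a matching stored block, or record this one as unique."""
    sig = zlib.crc32(block)
    for candidate in signature_index.get(sig, []):
        if candidate == block:      # byte-by-byte verification of the match
            return candidate        # true duplicate: reference the existing copy
    signature_index.setdefault(sig, []).append(block)
    return None                     # unique block (or a mere signature collision)

blk = b"\x00" * 4096
assert dedup_candidate(blk) is None   # first copy is stored
assert dedup_candidate(blk) == blk    # second copy deduplicates safely
```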
In addition, there are several disadvantages of an A-SIS solution, including:
With A-SIS, de-duplication can only occur within a single
flex-volume (not traditional volume), meaning candidate blocks must be
co-resident within the same volume to be eligible for comparison. The
deduplication is based on 4k fixed blocks, rather than the variable
block of (say) IBM/Diligent. This limits the de-duplication potential.
There is a complicated set of constraints when A-SIS is used
together with different snaps depending on the level of software. Snaps
made before deduplication will overrule de-duplication candidacy in
order to maintain data integrity. This limits the space savings
potential of de-dupe. Specifically, NetApp's de-dupe is not cumulative to space-efficient snapshots;
The performance overheads of deduplication as described above
mean that A-SIS should not be applied to a highly utilized controller
(where the most benefit is likely to be achieved);
There is an overhead for the metadata (up to 6%);
To exploit this feature, users are locked-in to NetApp storage.
IT Managers should note that A-SIS is included as a no-charge
standard offering within NetApp's Nearline component of ONTAP, the
company's storage OS.
Host-managed offline data compression solutions
Ocarina is an example of a host-managed data reduction offering, or what it calls 'split-path.' It consists of an offline process that reads files
through an appliance, compresses those files and writes them back to
disk. When a file is requested, another appliance re-hydrates data and
delivers it to the application. The advantage of this approach is much
higher levels of compression because the process is offline and uses
many more robust algorithms. A reasonable planning assumption is that reduction ratios will range from 3-6:1, and sometimes higher for initial ingestion and read-only Web environments. However, because of the need
to re-hydrate when new data is written, classical production
environments may see lower ratios.
In the case of Ocarina, the company has developed proprietary
algorithms that can improve reduction ratios on many existing file types
(e.g. jpeg, pdf, mpeg, etc), which is unique in the industry.
The main drawbacks of host-managed data reduction solutions are:
The expense of the solution is not insignificant due to
appliance and server costs needed to perform compression. In
infrequently accessed, read-only or write-light environments, these
costs will be justified.
To achieve these benefits, all files must be ingested, which is
a slow process. Picking the right use cases will minimize this issue.
After a file is read and modified, it is written back to disk
as uncompressed. To achieve savings, files must be re-compressed, again limiting use cases to infrequently accessed files.
Ocarina currently supports only files, unlike NetApp A-SIS, which supports both file- and block-based storage. However, Ocarina's implementation offers several advantages over A-SIS (remember, A-SIS is array-based).
The solution is not highly scalable because the processes related to backup, re-hydration, and data movement are complicated.
On balance, solutions such as Ocarina are highly suitable and
cost-effective for infrequently accessed data and read-intensive
applications. High update environments should be avoided.
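A minimal sketch of that offline model (zlib stands in for Ocarina's proprietary algorithms, and the ".z" naming is hypothetical): files are compressed in a background pass and rehydrated in full when read back.

```python
import os
import zlib

def batch_compress(path):
    """Offline pass: read a file through the 'appliance', write it back compressed."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".z", "wb") as f:
        f.write(zlib.compress(data, 9))  # offline = time for heavier compression
    os.remove(path)                      # original is replaced by the compressed copy

def rehydrate(path):
    """On access, the whole file is decompressed before delivery to the application."""
    with open(path + ".z", "rb") as f:
        return zlib.decompress(f.read())
```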
In-line data compression
IBM Real-time Compression offers in-line data compression whereby a device sits between servers and the storage network (see Shopzilla's architecture). Wikibon members indicate a compression ratio of 1.5-2:1 is a reasonable rule-of-thumb.
The main advantage of the IBM Real-time Compression approach is
very low latency (i.e. microseconds) and improved performance. Storage
performance is improved because compression occurs before data hits the
storage network. As a result, all data in the storage network is
compressed, meaning less data is sent through the SAN, cache, internal
array, and disk devices, reducing resource requirements and shrinking backup windows by 40% or more, according to Wikibon estimates.
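A toy illustration of the in-line idea (zlib streaming is only a stand-in; the appliance's actual real-time algorithm is not public): each chunk is compressed on the write path, so the SAN, cache, internal array, and disks all see fewer bytes.

```python
import zlib

def inline_write(chunks, send_to_san):
    """Compress each chunk as it passes through, before it reaches the SAN."""
    compressor = zlib.compressobj(1)   # fastest level: latency matters in-line
    for chunk in chunks:
        out = compressor.compress(chunk)
        if out:
            send_to_san(out)
    send_to_san(compressor.flush())    # flush the residual compressed bytes

written = []
inline_write([b"log line\n" * 1000] * 5, written.append)
print(sum(len(c) for c in written), "bytes reach the SAN instead of", 9 * 1000 * 5)
```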
There are two main drawbacks of the IBM Real-time Compression approach, including:
Costs of appliances and network re-design to exploit the compression devices. The Wikibon community estimates clear ROI will be realized in shops with greater than 30 TB;
Complexity of recovery, specifically users need to plan for
re-hydration of data when performing recovery of backed up files (i.e.
they need to have a Storwize engine or software present to recover from
a data loss).
On balance, the advantages of an Ocarina or IBM Real-time Compression approach are that they can be applied to any file-based storage (i.e. heterogeneous devices). NetApp and other array-based solutions lock customers into a particular storage vendor but have certain advantages as well. For example, they are simpler to implement because they are built into the array.
An Ocarina approach is best applied in read-intensive
environments where it will achieve better reduction ratios due to its
post-process/batch ingestion methodology. IBM Real-time Compression will
achieve the highest levels of compression and ROI in general purpose
enterprise data centers of 30 TB or greater.
Action Item: On-line data reduction is rapidly coming to
mainstream storage devices in your neighborhood. Storage executives
should familiarize themselves with the various technologies in this
space and demand that storage vendors apply capacity optimization
techniques to control storage costs.
Two solid days at VMworld 2011 and I got to do and see a lot. Here is a breakdown of the top 5 things I saw at VMworld.
#1 The SiliconAngle / Wikibon Cube
You couldn’t miss it. You walked onto the show floor and there it was, larger than life: the SiliconAngle / Wikibon Cube, broadcasting live from VMworld 2011. Guests on the Cube included Tom Georgens (NTAP), Pat Gelsinger (EMC), David Scott (HP) and Rick Jackson (VMware), as well as many more. The Cube also had 12 Industry Spotlights. The most interesting spotlight had to do with Storage Optimization, especially for VMware.
Oh, the times they are a-changing. Now that you can deliver HD TV live over the internet, the Cube has broadcast from a number of industry shows and user conferences. The great part about this: it is like the ability to watch a sporting event being covered by ESPN, but for tech. The Cube brings all of the highlights of these events right to your computer screen. Now if you can’t make an event, no problem; you can catch all the most important messages from the Cube. The Cube is now the new mechanism for delivering content to users in the way they want to receive the content: TV. For more, check out www.siliconangle.tv
#2 Storage Optimization – Industry Spotlight
In the Storage Optimization industry spotlight, Dave Vellante and his co-host John Furrier spent the first 15 minutes teeing up the concept. They discussed storage optimization, where it has come from and where it is going, especially in VMware environments. We are hearing more and more about storage efficiency technologies. During the next 15 minutes, Dave and I discussed the 5 essential storage efficiency technologies.
We also discussed the fact that the IBM Real-time Compression technology is the most efficient and effective compression technology in the industry; we also learned that IBM really acquired not just a real-time “compression” technology but a platform that can do a number of things in real time. In fact, the 5 IBM storage efficiency technologies all operate in real time, which is the most effective approach for customers.
We have been hearing a great deal about storage optimization in a VMware environment because virtualizing servers was successful for the server side of the house but didn’t do all it set out to do: it didn’t fix the overall IT budget.
Virtualizing servers only pushed the financial problem to the storage
side of the house. Users have told us that when they virtualize their
servers, storage grows as much as 4x. By leveraging the right storage
optimization technologies together, users can get their budgets back
under control and also deliver the promise that server virtualization
set out to do.
#3 More Free Time for “Real-life”
While on the Cube as a panelist with my good friend Marc Farley (HPsisyphus, formerly @3ParFarley), Dave asked us what was the most interesting thing we saw on the show floor while walking around. I didn’t hesitate in my response; there were two in my mind. First, it couldn’t be more obvious how fast data is growing. Over 50% of the 19,000 people there had cameras and were taking pictures and video.
That data is going to be stored somewhere. Additionally, they had these
cameras for a reason. Either we have more bloggers and tweeters than
we know about, more marketing people are going to these events or more
people are using social media to inform and educate others. The way in
which users want to receive data is always changing and evolving, and at
least at VMworld 2011 we were delivering content in a number of ways
especially photos and video. All that data will end up in the “cloud”.
The second thing I noticed was the amount of free time VMware has
given back to the IT user. I heard, on more than one occasion, end
users talking about family, vacations and travel instead of the usual
banter about how challenging their jobs are and the issues they have with their vendors, which is the normal thing I hear at these shows.
This was not an anomaly. I am chalking it up to the fact that VMware
makes people’s lives easier.
#4 Proximal Data
These “most interesting things” are not in any particular order. I say this because I believe that Proximal Data is THE
most interesting thing I saw at the show. Now Proximal Data just came
out of “stealth” in early August. They didn’t have a booth at VMworld
but they did have a “whisper suite”. So, I have to confess: since I used to be an analyst, sometimes people ask me to come take a look at their technology and their message to see if it is in line with what is going on in the industry, so I got to hear the pitch.
Proximal Data’s message is right on. It hits a very important and growing topic with VMware these days, the I/O bottleneck on virtual servers, and they solve this problem in a very unique and intelligent way.
First, the problem. One of the issues facing VMware today is the
number of virtual machines that can be hosted by one physical machine.
The more users can get on one system, the more efficient they can be.
The problem is that today systems are running into I/O workload bottlenecks that limit the number of virtual machines one system can run.
One way to solve this problem is to add more memory to the host, but that can be very expensive. You can add more HBAs or NIC cards, but that can be expensive and also difficult to manage. You can add more flash cache to your storage to improve the I/O bottleneck, but doing that only solves half the problem; you still need to solve the challenge on the host side, again with memory or host adaptors.
The solution: Proximal Data. It combines advanced I/O management software capabilities with PCI flash cards on the host, for a very reasonable price per host. The software combined with the card is 100% transparent to both the virtual servers and the storage, which to me is one of the most important features of the implementation.
Transparency is the key to any new technology. IT has a ton of
challenges and has done a great deal of work to get their environment to
where it is today. To implement a technology that causes all of that
work to be undone is very painful. Remember, the hardest thing to
change in IT is process, not technology. It’s important to preserve the
process. That is what Proximal Data does. Proximal Data can increase
the I/O capability of a VMware server with just a 5 minute installation
of the PCI card and their software. This technology can double and even
triple the number of virtual machines on any physical server and that
is a tremendous ROI. A new win for efficiency.
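As a sketch of the host-side caching idea (all details here are assumed; this is not Proximal Data's implementation): reads of hot blocks are served from local flash, and misses pass through to the array unchanged, so neither the virtual machines nor the storage see anything different.

```python
from collections import OrderedDict

class HostReadCache:
    """Toy read-through cache: hot blocks from host flash, misses pass through."""

    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read   # the unmodified path to the array
        self.capacity = capacity_blocks
        self.cache = OrderedDict()         # LRU order: block address -> data

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)    # hit: served with no SAN round trip
            return self.cache[lba]
        data = self.backend_read(lba)      # miss: transparent pass-through
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict the least recently used block
        return data
```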
There are a number of folks entering this market these days; however, Proximal does it transparently, with no agents, making it the most user-friendly implementation. While these guys won’t have product until 2012, when it hits the market, I am sure it will be very successful.
#5 Convergence to the Cloud
Are we seeing the coming of the “God Box”? A number of vendors are talking more and more about, as well as investing in, public/private cloud.
There are more systems popping up that have servers, networks, high
availability and storage all in one floor tile. These systems are
designed to integrate, scale, manage VMs simply, increase productivity
and ease the management of all possible application deployments in any
business. Additionally these boxes help you to connect to the cloud to
ease the cost burden. Is the pendulum swinging back to the “open systems” mainframe? Only time will tell.
One more for fun. The first meeting I had at VMworld was with a
potential OEM prospect of the IBM Real-time Compression IP. I have
always said that this technology could revolutionize the data storage
business much like VxVM did for Veritas many years ago. Creating a standard way to do compression across a number of systems can help users with implementation as well as ease the storage cost burden. I hope this moves forward, and I hope more folks step up who want to OEM the technology.
A quick summary of the latest announcements by Nick Harris
In the cover story this month, Lee
Cleveland, Distinguished Engineer, Power Systems direct attach storage, and
Andy Walls, Distinguished Engineer, chief hardware architect for DS8000 and
solid-state drives (SSDs), sat down to talk about all of the new storage
technologies IBM has been releasing lately. What I didn’t have room for in the
article was a nice summary of the technologies that can help you improve
access, manage growth, protect data, reduce costs or reduce complexity.
Whatever your goals, IBM has an integrated storage option for every need.
Here are the quick highlights of the
latest storage announcements:
New advanced software functions
New easy-to-use, Web-based GUI
RAID and enclosure RAS services and diagnostics
Additional host, controller and ISV interoperability
Integration with IBM Systems Director
Enhancements to Tivoli Storage Productivity Center (TPC), FlashCopy Manager
(FCM) and Tivoli Storage Manager (TSM) support
Proven IBM software functionalities
Easy Tier (dynamic HDD/SSD management)
RAID 0, 1, 5, 6, 10
Storage virtualization (local and external disks)
Non-disruptive data migration
Global and Metro Mirror
FlashCopy up to 256 copies of each volume
IBM Storwize Rapid Application
Runs on: AIX 7.1-5.3, IBM i 7.1-6.1
(with VIOS), Red Hat and SUSE Linux, z/VSE, Microsoft Windows, Mac OS X
Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top executive exodus it has been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay 550 people off, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's moving to a "streamlined operating model" focused on five areas, not apparently the literally 30 different directions it's been going in, although it did say, come to think of it, something about "greater focus," so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services;
collaboration; data center virtualization and cloud; video; and
architectures for business transformation."
Nobody seems to know what that last one is, and the Wall Street Journal criticized Cisco for not being able to explain in plain English what it's doing, while Barron's complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway, Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days, or by July 31, when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
Field operations will be organized into three geographic regions for faster decision making and greater accountability: the Americas; EMEA; and Asia Pacific, Japan and Greater China, still under the sales chief;
Services will follow key customer segments and delivery models, still under its multi-tasking COO Gary Moore;
Engineering, still reporting to Moore, will now be led by two-in-a-box Pankaj Patel and Padmasree Warrior, and aside from the company's five focus areas there will be a dedicated Emerging Business Group under Marthin De Beer focused on "select early-phase businesses" "with continued focus on integrating the Medianet architecture for video across the company."
Lastly, it's going to "refine" (but apparently not dismantle) its hydra-headed, decision-inhibiting Council structure, blamed for frustrating and running off key talent, down to three "that reinforce
consistent and globally aligned customer focus and speed to market
across major areas of the business: Enterprise, Service Provider and
Emerging Countries. These councils will serve to further strengthen the
connection between strategy and execution across functional groups.
Resource allocation and profitability targets will move to the sales and
engineering leadership teams which will have accountability and direct
responsibility for business results."
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on
making a series of changes throughout the next quarter and as we enter
the new fiscal year that will make it easier to work for and with Cisco,
as we focus our portfolio, simplify operations and manage expenses. Our
five company priorities are for a reason - they are the five drivers of
the future of the network, and they define what our customers know
Cisco is uniquely able to provide for their business success. The new
operating model will enable Cisco to execute on the significant market
opportunities of the network and empower our sales, service and engineering teams."
"As the world becomes more interconnected, instrumented and intelligent,
more and more information is created. This influx of information creates
both challenges and opportunities. Companies must build smarter
information infrastructures that can handle all of this information and
manage it intelligently. IBM has invested billions of dollars developing
smart storage solutions that embody a set of essential technologies:
virtualization, thin provisioning, deduplication, compression and
automated tiering that will enable you to manage the influx of
information and unlock new business opportunities." http://www-03.ibm.com/systems/storage/news/announcement/20101007.html
In many IT departments, increased user demand has led to haphazard
storage growth, resulting in sprawling, heterogeneous storage
environments. These environments make it difficult to achieve optimal
utilization and to provision storage capacity for new users and
applications. Storage virtualization can put an end to these problems.
It enables companies to logically aggregate disk storage so capacity can
be efficiently allocated across applications and users.
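A minimal sketch of that aggregation (hypothetical names and capacities): unlike devices contribute to one pool, and volumes are provisioned against the pool rather than against any single array.

```python
class StoragePool:
    """Toy virtualized pool: capacity from many devices, allocated as one resource."""

    def __init__(self, device_capacities_gb):
        self.free_gb = sum(device_capacities_gb)  # heterogeneous disks, one pool
        self.volumes = {}

    def provision(self, name, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted")
        self.free_gb -= size_gb
        self.volumes[name] = size_gb              # placement is hidden from hosts

pool = StoragePool([500, 750, 2000])              # three unlike devices
pool.provision("crm-db", 800)                     # spans devices transparently
print(pool.free_gb, "GB still allocatable")       # 2450
```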
"Since October 2010
IBM Corp. announced
workload-optimized systems to help companies manage a range of more
demanding workloads that are placing new stresses on already over-taxed
The offerings, which span IBM's systems portfolio, represent IBM's
investment in systems integrated and optimized across chips, hardware
and software, for a range of work at a time when companies face growing amounts of data and are under pressure to become more efficient in managing and drawing timely insights from the information.
The new systems include: A new offering for the zEnterprise BladeCenter
Extension (zBX), IBM's systems design that allows workloads on mainframe
servers and other select systems to share resources and be managed as a
single, virtualized system; and key new Storage and System x products,
which can bring new levels of efficiency to the data center." http://www.storagenewsletter.com/news/business/ibm-shipped-1000-storwize-v7000
by Steve Kenniston
After landing in Warsaw, I got into a car with the local sales leader
for Poland and we drove to the event location. It was a 2 hour drive.
First, the roads and the land in Poland reminded me very much of my hometown in Maine. Very scenic and rural, but beautiful and peaceful. We talked storage for 2 hours, and I am always fascinated by the thirst for knowledge I find when I travel. It was a great ride, followed up
by a customer reception and some local Polish brew.
Thursday I spent the day in Sterdyn, Poland for IBM Storage
University. There were 30 customers at the event, and it went very well. The event was at Palac Ossolinski, which is used today as an event center but has a very rich history; in fact, at one point it was used as a medical facility in WWII. The photo is of the building where we had the event. We covered a range of storage topics.
The customers were very interactive and provided a lot of insight into their environments. Interestingly enough, I learned during our customer reception that IBM storage is #1 in Poland, with HP second and EMC third. This is a true testament to the IBM sellers and the customers who use the IBM products every day to drive their business. I also learned that the data breakdown in Poland is 90% block, 10% file, which I found interesting; I would be interested to check back 12 months from today to see how it is different.
I did learn something very interesting in Poland. The question was asked, “Why XIV?” What is so special about XIV? The answer was awesome, and it started with 2 questions:
1) How old is RAID?
2) How old is your iPhone?
The reality is that data growth is outpacing what traditional RAID can handle, and data profiles are changing as well. These combined have driven new technologies like Cleversafe, Cloud Computing, Hadoop and XIV. Just like the iPhone is a new approach to the smart phone based on new things we know about how smart phones are being used, we know more about how data and storage are being used. New ways to deliver capacity and performance are needed in order to keep up with the changing times. I thought it was a very good answer, in terms that make sense.
Thursday evening I traveled back to Warsaw where I got in a bit late
and just went to a local pub, Sketch. Grabbed a small bite and some
local mead and then headed back to the hotel. I did get to see the
local Palace of Culture and Science in the middle of Warsaw, very
impressive, built as a gift from Russia to Poland.
I have an early flight to Prague. I am very excited about this part
of the journey as I have always wanted to travel to Prague. Press
meeting right when I land. Stay tuned.
IBM Real-time Compression appliances reduce storage capacity
utilization by up to 80% without performance degradation. IBM Real-time
Compression appliances increase the capacity of existing storage
infrastructure helping organizations meet the demands of rapid data
growth while also enhancing storage performance and utilization. The result is unprecedented cost savings, ROI, and operational and environmental benefits.
The IBM Real-time Compression appliances address data
optimization on primary storage so your capacity is optimized across all
tiers of storage. The IBM Real-time Compression Appliance STN6500 and STN6800
align to your existing storage networking configuration for easy
installation. The appliances install transparently in front of your
existing NAS storage and, through patented real-time compression, reduce the size of every file created. Read more>
by Steve Kenniston
The first city on my Eastern European trip was Moscow. I think the
traffic here is worse than the 101 in Silicon Valley during the dot com
era. That said, it was a great visit. I spoke at the Information
Infrastructure Conference at the Swissotel convention center in Moscow.
It was the first time I spoke to a group of people with an
interpreter. It was like being at the UN. The two main topics were
Storage Efficiency and Real-time Compression.
I spoke with a few customers and the press, and in dealing with the data growth challenges they wanted to know, “When it comes to big data, what is next, is it ‘huge data’?” Data growth is clearly a concern. Interestingly enough, though, most of the questions came around my title of “Evangelist”. One reporter told me, “If an Evangelist is ‘preaching the word of storage’, then why not just call yourself an Apostle?” How do you think that would look on an IBM business card: Global Storage Apostle?
The next day I did a day of “sales enablement” in the Moscow office.
We discussed mostly how to sell and position Real-time Compression and
what is next for the technology. I was very impressed with the team.
They were very technical and knew quite a bit about Real-time
Compression and really wanted to know in more detail how the technology
was invented. This means they are really talking about the technology
and the customers are drilling down into the next level of detail.
There are a lot of good opportunities for the technology in Moscow, and I look forward to hearing more about the success of Real-time Compression there.
I didn’t have a lot of time to sightsee, but I did make it to Red Square. You can actually buy a beer outside in Red Square and walk around. So I did. I took a few photos, and then, as the US was getting going, I had some work calls to attend to. That evening I spent on the
34th floor of my hotel having dinner. It was a great view of Moscow. I hope to come back.
“Storage Efficiency” has become a big topic over the past 12 months. There are a number of new technologies that have come out in the last few years that are helping to deal with storage growth. We all know that data is the root of the decisions that drive business today. The more data you have, hopefully, the better decisions you can make to drive your business to success. The question is, “what is the value (and hence the cost) of the infrastructure to create that success?” What we do know is that the ability to put more data in a highly efficient footprint can give your company a competitive edge. There are five technologies that can help an IT organization create an efficient storage infrastructure. These are:
1) Tiering
2) Virtualization
3) Thin Provisioning
4) Data Deduplication
5) Real-time Compression
It is also important to point out that there are some semantic distinctions when talking about storage efficiency, specifically between efficiency and optimization technologies. I think it is useful to attempt to define these, as they lead us to picking the right solutions for what we are trying to accomplish. For the purposes of this post, efficiency will relate to making existing capacity more useful, and optimization will mean making more capacity out of existing capacity.
Using these definitions, technologies such as Tiering, Virtualization and Thin Provisioning are efficiency technologies. These technologies help to utilize the existing capacity that you have.
Tiering is technology that is used on about 10% of your data or less. It is used to move data that requires higher performance to flash storage. Good tiering technology analyzes data access patterns and moves the most active data to the highest performing disk. It doesn’t really change the amount of physical capacity that is required; it just changes what type of capacity is required and allows IT to make sure data is operating as fast and efficiently as possible.
Virtualization technology allows IT to make sure disk capacity is used as efficiently as possible. Until recently, storage utilization rates were around 50%. By leveraging virtualization technology, IT can group pools of storage so they don’t need to purchase capacity needlessly. Virtualization can be used on 50% to 60% of your storage, but it doesn’t change your physical capacity infrastructure requirements; at most it allows users to take advantage of the 20% to 40% of their capacity that they once didn’t access.
Similar to virtualization technology, thin provisioning can also be used on 50% to 60% of your capacity; however, thin provisioning gives IT about 10% to 40% of their capacity back. Thin provisioning helps IT manage their existing capacity and utilization by making capacity available to users much more easily; again, however, it doesn’t change the amount of physical storage infrastructure required.
Optimization technologies help IT better manage their physical storage footprint. They optimize existing infrastructure by allowing users to put more capacity in the same physical space. The two technologies currently used today are data deduplication and real-time compression.
Optimization technologies are a bit tricky. There is a balance required between optimization and performance and availability. At the end of the day, IT chooses the storage it buys with two very important characteristics in mind: performance and availability. Optimization technologies cannot affect these characteristics. It is for this reason that data deduplication really isn’t ready for “prime time” on primary, active storage. Data deduplication creates too much of a performance impact on primary, active data. Today, data deduplication could be used on about 10% to 15% of the primary, less active capacity that is in the data center and only provides about 30% to 50% overall optimization. In other words, deduplication technology can impact the physical infrastructure by as much as 10%, meaning IT may not need to buy as much physical capacity.
Real-time compression, on the other hand, has one of the most dramatic effects on primary storage capacity. Real-time compression can be used on as much as 85% of the storage footprint and can compress data between 50% and 80%. That means real-time compression could let IT purchase as much as 70% less overall storage capacity. Real-time compression also does not affect the main characteristics for which users buy storage (performance and availability). IT could have as much as 70% less footprint but keep the same amount of data or more online. Additionally, IT can now purchase storage opportunistically without such a dramatic impact on their infrastructure, process or budgets. This allows companies to keep more capacity online and available, to do more analytics on more data, and to become more competitive.
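The arithmetic behind that claim, as a rough worked example using the figures quoted above (85% applicability and up to 80% compression):

```python
raw_tb = 100.0        # managed capacity before optimization
applicable = 0.85     # share of the footprint compression can address
reduction = 0.80      # best-case space reduction on that share

optimized = raw_tb * applicable * (1 - reduction) + raw_tb * (1 - applicable)
print(f"{raw_tb:.0f} TB shrinks to {optimized:.0f} TB")     # 100 TB -> 32 TB
print(f"about {1 - optimized / raw_tb:.0%} less capacity")  # ~68%, 'as much as 70%'
```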
When deciding which storage efficiency technology will have the most effective impact on your overall environment and budget, start with optimization technologies to get data growth under control. Adding value to the line of business that can drive revenue with more data will make you a hero and your business more successful.
Load Balancers Are Dead: Time to Focus on Application Delivery
2 February 2009, Mark Fabbi, Gartner RAS Core Research Note G00164098
When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.
Overview
This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Key Findings
Enterprises are still focused on load balancing.
There is little cooperation between networking and application teams on a holistic approach for application deployment.
Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
Enterprises must start building specialized expertise around application delivery.
What You Need to Know
IT organizations that shift to application delivery will improve internal application performance, which will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proved, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis
What's the Issue?
Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using this new equipment as if it were a circa-1998 SLB. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation and are not taking advantage of the growing list of services that are available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen?
The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade.
Initially, this innovation focused on the inbound problem, such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security, from a networking product to one that touched networking, server, applications and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.
Current Obstacles
As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why We Need More, and Why Should Enterprises Care?
Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity, as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure because of "chatty" and complex protocols. Without providing features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements. ADCs also provide simplified deployment and extensibility and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers. Organizations can use these capabilities as a simple, quick alternative to modifying Web applications.
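As a sketch of such a rule (generic Python standing in for vendor rule languages like F5 iRules; the regular expression and masking format are this sketch's assumptions, not any product's behavior):

```python
import re

CARD = re.compile(rb"\b(?:\d[ -]?){12}(\d{4})\b")   # 16-digit card numbers

def response_rule(body: bytes) -> bytes:
    """ADC-style response rewrite: mask all but the last four card digits."""
    return CARD.sub(rb"****-****-****-\1", body)

page = b"Order confirmed. Card on file: 4111 1111 1111 1234."
print(response_rule(page).decode())
# Order confirmed. Card on file: ****-****-****-1234.
```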
Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management and provisioning applications, and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.
What Vendors Provide ADC Solutions Today?
During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
A strong set of advanced platform capabilities
Customizable, extensible platforms and solutions
A vision focused on application delivery networking
Affinity to applications:
Vendors need to be application-fluent (that is, they need to "speak the language" of applications)
Support organizations need to "talk applications"
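As referenced above, here is a hypothetical sketch of the kind of open-API call a programmatic interface enables; the endpoint, payload and "drain" semantics are all invented for illustration and correspond to no real vendor's API:

import json
import urllib.request

# Hypothetical endpoint: no real vendor exposes exactly this URL or schema.
ADC_API = "https://adc.example.com/api/pools/web/members/10.0.0.5"

# Ask the ADC to stop sending new sessions to one server (for example,
# before maintenance) while letting existing sessions finish.
request = urllib.request.Request(
    ADC_API,
    data=json.dumps({"state": "drain"}).encode(),
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
with urllib.request.urlopen(request) as response:  # fails outside this sketch
    print(response.status)

An orchestration or provisioning system could issue the same call automatically, which suggests how the real-time-infrastructure linkage described above could be wired up.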
What Should Enterprises Do About This?
Enterprises must start to move beyond simply refreshing their load-balancing footprint. For organizations willing to shift their thinking and organizational boundaries, the features of advanced ADCs are compelling enough that continuing to invest in basic SLBs wastes time and resources. In most cases, the incremental investment in advanced ADC platforms is easily offset by reduced requirements for servers and bandwidth and by clear improvements in end-user experience and productivity. In addition, enterprises should:
Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performance individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home for these skills will provide immediate benefits, because organizational barriers will be quickly eliminated.
Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
Application layer proxy, which is often bidirectional
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 --
Brocade (NASDAQ: BRCD) today announced that FleetCor,
a leading independent global provider of specialized payment products
and services to businesses, commercial fleets, major oil companies,
petroleum marketers and government fleets, has selected Brocade as the
vendor to build its cloud-optimized
network. This new network enhances FleetCor's ability to securely
process millions of transactions monthly and ultimately better serve its
commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor
cardholders worldwide, and they are used to purchase billions of gallons
of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help
evolve its data center and IT operations into a more agile private cloud
infrastructure. Brocade® cloud-optimized networks
are designed to reduce network complexity while increasing performance
and reliability. Brocade solutions for private cloud networking are
purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we
looked at market leadership and non-stop access to critical data," said
Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade
cloud-optimized networking solutions are perfect for our data centers
because they allow us to optimize applications faster, virtually
eliminate downtime and help us meet service level agreements for our
customers. Moving to a cloud-based model also provides us the
flexibility to make adjustments on the fly and access secure information
virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router for each of its three data
centers, citing scalability as a major driver for the purchase. This
approach enables FleetCor to virtualize its geographically distributed
data centers and leverage the equipment it already has, at the highest
level, to achieve maximum return on investment. The Brocade MLXe
provides additional benefits for FleetCor by using less power and occupying a smaller footprint than competitive routers, which is critical in power- and space-constrained locations that need to allow for growth. The Brocade MLXe also enables continuous business operation for FleetCor through Multi-Chassis Trunking, massive scalability (supporting the industry's highest 100 GbE density with no performance degradation for advanced features like IPv6), and flexible chassis options.
The Brocade ServerIron ADX
Series of high-performance application delivery switches provides
FleetCor with a broad range of application optimization functions to
help ensure the reliable delivery of critical applications.
Purpose-built for large-scale, low-latency environments, these switches
accelerate application performance, load-balance high volumes of data
and improve application availability while making the most efficient use
of the company's existing infrastructure. The ADX also delivers dynamic application provisioning and de-provisioning for FleetCor's highly virtualized data center and enables seamless migration and translation to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers,
FleetCor has eliminated thousands of costly networking cables, saving
it hundreds of thousands of dollars and allowing the company to segment,
streamline and secure its network. FleetCor has also been able to
easily integrate Brocade network technology with third-party offerings
already installed in the network, for complete investment protection.
FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for
its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in
each of our data centers to help us leverage the benefits of cloud
computing and the Brocade MLXe delivered on all fronts," said Keirbeck.
"By virtualizing our data center, Brocade allows for non-stop access to
the mission-critical data that FleetCor and its customers rely on every
day. We chose the Brocade MLXe because of the tremendous results we
already saw from our existing Brocade solutions and the exceptional
support and service."
According to a report from analyst firm Gartner, "Although 'economic
affordability' is an immediate, attractive benefit, the biggest
advantages (of cloud services) result from characteristics such as
built-in elasticity and scalability, reduced barriers to entry,
flexibility in service provisioning and agility in contracting."(1)
Businesses continue to search for storage solutions that save money
without sacrificing performance. Last year, IBM introduced Scale Out
Network Attached Storage (SONAS), the industry’s first such
network-attached storage (NAS) offering to address this business need.
SONAS is an enterprise-class NAS system that provides extreme
scalability, availability and security—and does so with record-breaking
performance. It’s designed as a single global repository to manage
multiple petabytes of storage and billions of files, all under one file system.
In April, IBM announced significant performance enhancements to
SONAS: improved information lifecycle management (ILM), hierarchical
storage management (HSM), as well as ease of deployment and antivirus integration.
Todd Neville, SONAS program leader at IBM, says SONAS is unique in
that it can very near-linearly scale to almost any performance level.
With SONAS, he says, “You can build a system that’s as fast as you want
it to be; but it’s not just about absolute size, it’s also about bang
for your buck. We’ve significantly increased the software performance in
our upcoming release 1.2, so customers see a significant performance
increase on their current platform with no additional costs.”
Funda Eceral, SONAS market segment manager at IBM, says SONAS is the
only true scale-out NAS system available in the marketplace. “While you
can nondisruptively add capacity with storage building blocks,” Eceral
says, “you can also still continue to independently scale out your I/O
performance with interface nodes. It brings operational efficiency and
extraordinary utilization rates for each customer.”
Three Key Features
This version of SONAS offers three key features, according to Neville:
Ease of deployment. Using Network Data Management Protocol
(NDMP), a SONAS device can be easily integrated into existing
data-center backup infrastructures. “If you have an enterprise backup
deployment using NDMP, you will be able to take SONAS and quickly
connect with a wide variety of popular backup systems,” Neville says.
Built-in antivirus integration. Scalable NAS storage devices
must have a way for an antivirus function to perform scans on files
intelligently, such as when they’re opened or closed. SONAS includes built-in functionality that lets a third party like Symantec integrate
into the SONAS device to perform antivirus operations, as simple “full
file-system scans” become cumbersome at enterprise scales.
Physical size. Neville says customers asked IBM to make the
SONAS device more compact, although it supports almost a full petabyte
in a single rack, making it the only offering in IBM’s NAS portfolio
that can do so. It’s now 10 inches shorter than the original device, can
scale up to 14.4 petabytes (with 2 TB drives) and has a single point of
management, which can significantly reduce storage-administration costs.
“Everyone says, ‘We do tiering, HSM and ILM,’ but design matters—IBM does it differently.” —Todd Neville, SONAS program leader, IBM
Backups are a necessity. They’re important in any computing environment, and you would be hard pressed to find anybody who would disagree with the criticality of having backup copies of their data. In the event that primary systems or data sets are unavailable, backups are designed to provide the assurance that significant amounts of work, time or money aren’t lost.
To protect the partners, customers and constituents of organizations from risks associated with potential data loss, the U.S. federal government has established various compliance requirements that must be met and maintained. In addition to general business-compliance requirements, many industries have additional regulations that must be met. Examples include Sarbanes-Oxley Act of 2002 (SOX), Payment Card Industry Data Security Standard (PCI DSS), the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), Gramm-Leach-Bliley Act (GLBA), and the Federal Information Security Management Act (FISMA); it’s easy to see why compliance is often referred to as regulatory alphabet soup (which is not far off from the storage industry, I would add).
Depending on the industry, the mandated data-retention timeframe can vary from as few as seven years to as many as 100 years. At the upper end of that spectrum, a significant amount of infrastructure investment and planning is necessary. Unfortunately, systems complexity becomes a byproduct of trying to solve these challenges, and that complexity evolves over time until it becomes unmanageable.
Just as the specific requirements for these regulations vary, so do the consequences of being non-compliant, which is often discovered during periodic industry audits or following a breach. Failure to meet compliance requirements could result in warnings or fines, and in extreme cases, termination of operations and prison time. The trouble is: Compliance testing can be difficult to do, and it can come down to having confidence in whether or not systems will perform adequately under trial.
Project to Streamline IT Infrastructure to Improve Service Delivery, Reduce Energy Consumption and Strengthen Security
NEW YORK, N.Y.
31 Jan 2011:
IBM (NYSE: IBM) today
announced that it has been selected by the City of New York to build a
more efficient, smarter technology platform for CITIServ, the City's IT
infrastructure modernization program. The goal of the project is to
streamline delivery of City services by consolidating and updating
outdated and incompatible IT, thereby reducing energy consumption,
strengthening security, and providing City workers with faster access to
the latest technologies.
In the cover story this month,
Lee Cleveland, Distinguished Engineer, Power Systems direct attach
storage, and Andy Walls, Distinguished Engineer, chief hardware
architect for DS8000 and solid-state drives (SSDs), sat down to talk
about all of the new storage technologies IBM has been releasing lately.
What I didn’t have room for in the article was a nice summary of the
technologies that can help you improve access, manage growth, protect
data, reduce costs or reduce complexity. Whatever your goals, IBM has an
integrated storage option for every organization.
Here are the quick highlights of the latest storage announcements:
IBM Storwize V7000
New advanced software functions
New easy-to-use, Web-based GUI
RAID and enclosure RAS services and diagnostics
Additional host, controller and ISV interoperability
Integration with IBM Systems Director
Enhancements to Tivoli Storage Productivity Center (TPC), FlashCopy Manager (FCM) and Tivoli Storage Manager (TSM) support
Proven IBM software functionalities
Easy Tier (dynamic HDD/SSD management)
RAID 0, 1, 5, 6, 10
Storage virtualization (local and external disks)
Non-disruptive data migration
Global and Metro Mirror
FlashCopy up to 256 copies of each volume
IBM Storwize Rapid Application Storage Solution
Runs on: AIX 7.1-5.3, IBM i 7.1-6.1 (with VIOS), Red Hat and SUSE Linux, z/VSE, Microsoft Windows, Mac OS X
ProtecTIER deduplication offers 25-to-1 reduction and online backup
In June, IBM debuted ProtecTIER* deduplication solutions
for AIX* and IBM i. ProtecTIER offers solutions to those who can’t complete backup operations in a given window, have difficulty protecting rapidly growing amounts of data, or find their current backup systems falling short.
With data amounts growing, deduplication is becoming a vital part of
data management, backup and recovery. “One of the reasons ProtecTIER is
so crucial is because of the crazy growth the world is experiencing as
it moves to an all-digital environment,” says Victor Nemechek,
ProtecTIER deduplication offering manager at IBM. “Customers are finding
their data often doubles or more every year and their current backup
systems make it difficult to capture that data, protect it and restore
it when they need to.”
For backups, many companies use tapes that load data quickly but
present retrieval problems. These challenges—along with reliability
problems—sent customers to disk where data was more accessible, but also
expensive. Companies used disk for small portions of their most
critical data, and kept their other data on tape. “Even with disk for
critical data, backup is still an issue because you have a primary disk
that you store your data on and you have to have that much disk to back
up to, basically doubling your disk needs, and that can be very
expensive,” Nemechek says.
“Deduplication can squeeze 25 terabytes of data down to only
1 terabyte of physical disk, so customers can have the speed and
reliability of disk but without that one-to-one cost.” —Victor Nemechek,
ProtecTIER deduplication offering manager, IBM
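As a rough back-of-the-envelope illustration of what that ratio means for sizing (not a claim about ProtecTIER's internals), the sketch below assumes the 25-to-1 figure quoted above; achievable ratios vary by workload and retention policy:

def physical_tb_needed(logical_tb: float, dedup_ratio: float) -> float:
    """Physical disk (TB) required to hold logical_tb of backup data."""
    return logical_tb / dedup_ratio

backup_tb = 25.0
print(physical_tb_needed(backup_tb, 1.0))   # plain disk-to-disk copy: 25.0 TB
print(physical_tb_needed(backup_tb, 25.0))  # with 25:1 deduplication: 1.0 TB

This is the arithmetic behind Nemechek's point: without deduplication, disk-based backup roughly doubles your disk spend; with it, the backup tier shrinks to a fraction of the primary capacity.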
by Steve Kenniston
Alright, landed safe in Prague and was picked up by one of my colleagues and whisked away to the IBM office. There we did an interview with Czech writer Martin Noska from Computerworld for IDG in the Czech Republic. The first thing Noska told me was that IBM is number one in storage sales in the Czech Republic (just like Poland!). He also had some very good questions, and he led with “What are IBM’s biggest challenges in the storage business?” I had thought about this for a while, and I
would have to say it is really about marketing our storage “solutions”
to the customer base. IBM is a double-edged sword. IBM is so big and
has so many products it becomes difficult to market or message all of
our products without inundating all of our customers and confusing
them. If you think about it, IBM has hundreds of thousands of customers
and business partners, if not more. This is one of our strengths.
When customers have needs or requirements we have very good input into
our product portfolio, perhaps the best in the business. Combine this
with the fact that IBM has not only storage solutions but technology
across the entire stack from servers to networking. So when it comes to
developing the right technology, that solves real customer problems, I
would argue that IBM’s portfolio is the best in the business. IBM takes
an extreme amount of care when developing a solution to ensure that it
matches the customer requirements based on the changing needs of IT.
Having an integrated portfolio that works well with our ISV partners,
VMware for example, allows us to help customers speed their time to ROI
and be very competitive in the marketplace. The challenge is, how do
we properly message our new solutions to our customers, in a timely
manner so that they are well aware of new products without giving them
too much information such that it just becomes noise? It is difficult
to say the least.
The interview went very well. There were questions about tape, where we discussed the advantages of IBM’s LTFS technology for more advanced tape usage, and we discussed the direction data deduplication will go as well. Noska’s view was that there hadn’t been any advancement in data deduplication in the last five years. I told him that for secondary storage (backup) he is right, but I also told him that the real advancement in deduplication will come when it is ready for primary storage. Today deduplication isn’t ready for primary, but it will be.
On Monday the 13th we traveled to visit Avnet. They are a
great IBM partner. Like most partners, they have a very large SMB install base and, consistent with a lot of the SMB feedback I have been getting, they are looking for a building-block solution that has all of the software features implemented as part of the stack. SMB and enterprise customers alike are starting to realize that the value in any array is increasingly the software stack that makes the hardware efficient, optimized, flexible, and dynamic. IT’s job continues to get more and more challenging as it develops strategic initiatives to make the business more competitive, and it is the job of the vendor to make sure these solutions are as optimized and cost-effective as possible.
We also visited DHL. These guys have one of the greatest datacenters
I have ever visited. They are very advanced and push a lot of data.
They do some very strategic logistics for a number of companies in Europe and Asia. They, like many others, have a number of challenges. Ever since my blog post about “The 5 Most Interesting Things at VMworld” (#4), I have been asking, “What is your most challenging storage issue?” Today I heard something very interesting: my host told me that storage was not his “most difficult” challenge. Storage efficiency was important to him in order to keep driving down costs for his organization as they deliver a service to the different groups that make up DHL, but his most difficult challenge was with server I/O in his VMware environment. If you read #4 in my post, regarding Proximal Data, this is exactly the issue they address. As VM instances grow on the physical servers, the I/O starts
to become the big problem. DHL runs over 4000 instances of VMware and
as the business demands more applications and application resources,
they are bound by the I/O of the server, which also causes them to WAY
over-provision their storage for performance reasons. This is very time-consuming, management-intensive and expensive. The combination of a
solution like Proximal Data as well as compression can help them
optimize their infrastructure to save money and deliver better, more
cost effective services to their lines of business.
On the lighter side, I spent the weekend in Prague. What an amazing
city. The weather was fantastic and I was able to take a lot of great
photos. I walked around Prague Castle, ate some authentic Czech food,
visited the memorial for the Czech hockey players that passed in the
Russian plane crash and met some pretty interesting people. You can
check out some of my photos of Prague at www.facebook.com/skenniston.
Coincidentally, the photo above shows the “Golden Lane,” where the alchemists worked to turn anything they could find into gold in the city.
Some items are just bound together: salt and pepper, a horse and carriage, or even smoke and fire. While some may argue it’s hard to grow a data center without adding cost and complexity, IBM begs to differ. Its smarter approach to data storage means increased capacity goes hand in hand with cost efficiency and ease of use.
Capacity and Simplicity
Brocade is leading the way by helping
organizations around the world build cloud-optimized networks that increase
business agility and profitability. Offering robust, yet flexible network
solutions, Brocade enables organizations to choose the best type of cloud model
for their unique business requirements and objectives. Brocade is introducing two industry-leading product line advancements tailored to your customer's existing IT infrastructure.
Developments to the Data Center SAN Environment
Based on years of proven success, Brocade SAN
fabrics provide the most reliable, scalable, high-performance foundation for
private cloud architectures. Brocade continues that leadership with the
industry's first 16 Gbps Fibre Channel SAN solutions:
The Brocade® DCX® 8510 Backbone, the industry's most
powerful SAN backbone for private cloud storage
The Brocade 6510 Switch, the new price/performance leader in enterprise SAN
The Brocade 1860 Fabric Adapter, a new class of adapter that meets all your
customer's Fibre Channel/FCoE/IP connectivity needs in a single device
Brocade Network Advisor, an easy-to-use, unified network management platform
the NetIron® MLX
Brocade network solutions for service providers
combine high scalability and performance to transform your customer's business
with new revenue-generating cloud services—increasing their overall level of
profitability. Key offerings include:
New 10 GbE, 100 GbE, and advanced management modules for the Brocade MLX
Series of high-performance core routers
Compact Brocade NetIron CER 2000 Series routers, delivering high scalability
and performance at the network edge
Leading-edge enhancements in the Brocade NetIron 5.2 software release for
IPv6 scaling, broader MPLS connectivity, and more
Brocade Network Advisor, an easy-to-use, unified network management platform
World-class professional services and technical support for carrier-class networks
IBM System Storage TS7610 ProtecTIER Deduplication Appliance Express
The TS7610 is a powerful new addition to the IBM ProtecTIER
solution set, which brings the benefits of the reliability and
performance of disk-based data protection to mid-sized businesses who
need to ensure their backups are successfully completed in a timely
manner. The TS7610 brings the added benefit of inline data deduplication, which can squeeze 25 TB or more of backup data into a single terabyte of physical storage.
The TS7610 also reduces costs (such as reducing downtime and time spent
managing and supporting systems) up to 45% over standard
non-deduplicated virtual tape library systems.
Systems combining block and file storage maximize the benefits of server virtualization
The data center of the future
looks an awful lot like data centers of the past in one important respect:
storage demands. While the trend toward server virtualization and consolidation
is transforming the way data centers are being designed, built and managed,
rampant data growth continues to be a limiting factor.
In its annual “Digital Universe” study, EMC projects nearly 45-fold data growth by 2020. Data growth was cited as the No. 1 data center hardware infrastructure challenge in a recent Gartner survey of representatives from 1,004 large enterprises in eight countries.
“While all the top data center
hardware infrastructure challenges impact cost to some degree, data growth is
particularly associated with increased costs relative to hardware, software,
associated maintenance, administration and services,” said April Adams, research
director at Gartner. “Given that cost containment remains a key focus for most
organizations, positioning technologies to show that they are tightly linked to
cost containment, in addition to their other benefits, is a promising approach.”
In order to drive down costs
and reduce operational complexity, organizations virtualizing their data centers
and beginning the journey to the cloud require a storage infrastructure that is
both simple and efficient. Unified storage delivers on both counts.
Unified storage is the
combination of block- and file-based storage in the same system with common
management. These multiprotocol systems can be attached to servers via IP and/or Fibre Channel networks.
The Road to Unified Storage
Unified storage is an evolving
technology, but not a new technology. A variety of vendors have taken stabs at
providing block- and file-oriented storage in a single box since the late 1990s.
Some of the earliest attempts involved simply putting two machines together in a
single enclosure and then creating a GUI to handle management of both.
Next came NAS gateways, which
used a NAS box as an entry to SAN storage. In this setup, a NAS box provides
file-based access to applications via a LAN port, and then stores the data on a
block-oriented storage array that can be accessed across the SAN. While this
approach accommodates both block and file protocols, it has some disadvantages.
One of the major problems is that data must be transferred twice — once across
the NAS Ethernet connection and again across the Fibre Channel or IP SAN — which
adds to I/O latency. Another issue is that the management of NAS gateways
continues to be separate from the management of SAN arrays.
More recent unified storage
platforms leverage virtualization technology to offer a much deeper integration
of file- and block-based storage. A file system performs I/O to disk blocks
using a common virtualized disk-volume engine. Virtualization allows
administrators to create a seamless pool of unified storage and enables
transparent data movement for tiered storage.
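A toy sketch of that idea, using invented Python classes rather than any real product's API: both the block (LUN) and file (NAS) front ends draw extents from one shared virtualized pool, which is what gives unified storage a single point of capacity management and makes transparent data movement possible:

class VirtualizedPool:
    """Common pool of virtual extents shared by block and file front ends."""
    def __init__(self, total_extents: int):
        self.free = total_extents

    def allocate(self, extents: int) -> None:
        if extents > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= extents

class BlockVolume:
    """Presented to hosts as a LUN over Fibre Channel or iSCSI."""
    def __init__(self, pool: VirtualizedPool, extents: int):
        pool.allocate(extents)

class FileSystem:
    """Exported to clients over NFS or CIFS."""
    def __init__(self, pool: VirtualizedPool, extents: int):
        pool.allocate(extents)

pool = VirtualizedPool(total_extents=1000)
lun = BlockVolume(pool, extents=300)   # block workload
nas = FileSystem(pool, extents=500)    # file workload
print(pool.free)                       # -> 200 extents left for either protocol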
While NetApp introduced unified
storage to the market several years ago, it is now available from most storage
vendors. Many of these solutions include features such as data replication,
incremental snapshots and remote mirroring that contribute to robust business continuity.
Aligning Storage with Virtualization
IT organizations face growing
pressure to transform the data center to meet increasing demands for wider
access to information, transactions and services. To a great degree, this means
creating a technology infrastructure composed of virtualized computing and
networking. By breaking the relationship between applications and the IT systems
on which they run, virtualization frees system administrators from providing
specific hardware with static configurations.
However, many organizations
have found that the benefits of virtualization are offset by increased storage
complexity and expense. For example, the creation of hundreds or even thousands
of virtual server image files often leads to massive storage waste. Because each
of these images is typically many gigabytes in size, the total storage required
in virtual environments can be 30 percent more than in an equivalent physical
environment. As a result, virtual machine sprawl increases operational overhead
and compromises storage utilization efficiency and overall business agility.
Unified storage improves
utilization by allowing organizations to consolidate and virtualize storage
across storage protocols, environments and mixed storage platforms. Combinations
of block storage (Fibre Channel or iSCSI) and file storage (NAS systems with
CIFS or NFS) can be managed via a common set of features such as snapshots, thin
provisioning, tiered provisioning, replication, synchronous mirroring and data
migration — all from a single user interface. This shift toward a shared
infrastructure enables organizations to achieve storage utilization rates of 85
percent or more, compared to the sub-50-percent rates typical of standalone storage environments.
“IT managers are looking for
storage solutions that not only deliver immediate value, but also enable
flexibility and growth over time, so that storage can adapt to changes in an
organization's applications, user needs or business demands,” said Mark Peters,
senior analyst at Enterprise Strategy Group. “Storage solutions that are both
virtualized and unified are ideal to address the needs for both storage
flexibility and data growth.”
by Steve Kenniston
History truly does repeat itself. We are talking about the history of data storage. Every once in a while a new technology comes along that requires a new way to think about infrastructure. Notice I said “infrastructure”. I’d like to paint two analogies:
1: RAID – Prior to RAID, users stored their data on disk and, if they could afford it, backed that data up to have a protected copy. When RAID came out, users were able to store their data on multiple disks appearing as one device. The benefits were increased data reliability and better performance. This new technology, however, fundamentally changed how disk was sold, even though the questions stayed the same:
How much capacity do you need?
What type of performance does your application require?
The sales rep’s point of view changed, though. There were a number of new
considerations that needed to be taken into account. First, the age old
question, “Will I sell less storage “stuff?” Remember the person, at
the time, selling the disk was probably also selling the backup tape and
software to protect that information. If the disks are more reliable,
maybe the customer won’t need as much tape? Second, when the capacity
question came up, the seller also needed to know what type of RAID the customer wanted to ensure they sold them enough drives. It was no longer as simple as asking for the capacity requirement and dividing it by the drive capacity of the day; now, depending upon the RAID level, there was a new set of math that needed to be done. Third was the notion of performance: more spindles meant more performance, so once the capacity equation was solved, you also needed to know the I/O requirements to make sure the right number of drives were sold to satisfy both the capacity and the performance.
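For readers who want that math spelled out, here is a small sketch of the sizing arithmetic; the RAID overhead factors follow the standard definitions, while the drive size, IOPS per spindle and RAID group size are illustrative assumptions:

import math

def drives_for_capacity(usable_tb: float, drive_tb: float, raid: str,
                        group: int = 8) -> int:
    """Raw drives needed to deliver usable_tb under a given RAID level."""
    overhead = {
        "raid10": 2.0,                 # mirroring doubles raw capacity
        "raid5": group / (group - 1),  # one parity drive per group
        "raid6": group / (group - 2),  # two parity drives per group
    }[raid]
    return math.ceil(usable_tb * overhead / drive_tb)

def drives_for_iops(required_iops: float, iops_per_drive: float) -> int:
    """Spindles needed to hit the I/O target, ignoring cache effects."""
    return math.ceil(required_iops / iops_per_drive)

# Example: 20 TB usable on 2 TB drives at RAID 6, with a 5,000 IOPS target
# and roughly 150 IOPS per spindle.
need = max(drives_for_capacity(20, 2, "raid6"), drives_for_iops(5000, 150))
print(need)   # capacity says 14 drives, IOPS says 34 -> buy at least 34

Note how performance, not capacity, often ends up setting the drive count, which is exactly the shift in selling described above.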
Guess what: we figured it out, and the industry never looked back. RAID is a de facto standard in all storage subsystems today; I even run RAID in my home. The business benefits of having RAID far outweighed the costs. In fact, it is probably one of the first times in storage history that the question “how can you afford not to have it?” came up.
2: Virtual Machines – When VMware came out, the value proposition was: do more work with less physical infrastructure. And again, the
business benefits far outweighed the technology hurdle of implementing
the new solution.
Keeping in mind that it is much harder to change process in IT than it is to change technology, IT decided that this new way of serving up processing power to applications was well worth all of the process changes it would require. One example: backup would need to change when implementing virtual server technology. The data would grow 4x, and the processing of that information for backup would take longer, in a world where time was all too valuable. However, the business benefit justified the change.
Again, the seller’s questions were consistent:
How many virtual servers do you need? (Capacity)
What type of performance do you need for each virtual server? (Performance)
The answers to these questions allowed a sales rep to configure the right number of physical systems to handle the right number of virtual systems to make the line of business successful. Additionally, some of the same considerations came up. “Will I sell less server hardware and make less money?” Now that there was new server technology (more processors, the ability to handle more memory), systems could be bigger, and more expensive. Sellers also needed to know a bit more about “capacity”: how many virtual systems could a physical system run successfully? They also needed to have an understanding of performance. Now sellers were configuring systems to run the equivalent of 20 to 100 servers on one physical machine.
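A comparable sketch of the virtualization-era sizing questions, with all numbers assumed for illustration:

import math

def hosts_needed(num_vms: int, vcpus_per_vm: int, gb_per_vm: int,
                 host_cores: int, host_gb: int,
                 cpu_overcommit: float = 4.0) -> int:
    """Physical hosts needed, sized by both CPU (with overcommit) and memory."""
    by_cpu = math.ceil(num_vms * vcpus_per_vm / (host_cores * cpu_overcommit))
    by_mem = math.ceil(num_vms * gb_per_vm / host_gb)
    return max(by_cpu, by_mem)

# 100 VMs at 2 vCPUs / 8 GB each, on 16-core, 256 GB hosts with 4:1 overcommit.
print(hosts_needed(100, 2, 8, 16, 256))   # -> 4 physical hosts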
Today I would suggest that we are at a crossroads in history. New technology has come along that will have a significant impact
on the storage world. First, research from IBM reflects the fact that
disk drives can no longer keep getting twice as dense for half the cost, as they did throughout the late ’90s and early 2000s. The technology doesn’t exist today to make the drives spin faster, stay cool and not lose data. Until now. Real-time compression is a game-changing technology that will add significant value to the
storage industry without having to change the way IT thinks about the
deployment of their storage.
Data is growing at such a significant pace today, and with the latest IBM
research about disk capacities, something needs to change. Data centers
are just running out of space and more customers want to keep more data
on line for reasons such as competitive edge or compliance, but no
matter the reason, they want access to their information. Enter
real-time compression. Now, there is a fundamental difference between real-time compression and other compression technologies and implementations, which I am not going to get into here, but it is safe to say that post-process and inline compression are very different from real-time compression, and users can’t get the benefits of improved primary storage capacity, transparently and with no performance impact, with anything but real-time compression technology.
Real-time compression, like other game-changing technology, doesn’t require any new questions; there is simply a new set of math:
How much capacity is required?
What is the performance requirement?
Over time, real-time compression will be as ubiquitous as RAID, and just
like users don’t think that much about RAID, users won’t need to think
about compression. Compression will become an expected feature of the
array. It doesn’t matter that it now takes fewer drives to satisfy the
original question around capacity and performance. With data growing as
fast as it is and with disks not being able to keep up their growth
pace, something needs to change and that something is real-time
compression. Soon, it won’t matter what the physical capacity of a disk drive is; it will be about the disk’s virtual capacity, what it is capable of storing, that matters. It is time we all started thinking this way.
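A minimal sketch of that new math: a drive's virtual capacity is its physical capacity times the compression ratio, so the drive count for a given requirement shrinks accordingly; the 2:1 ratio below is an assumed example, since achievable ratios depend on the data:

import math

def drives_needed(required_tb: float, drive_tb: float, ratio: float) -> int:
    """Drives needed once each drive effectively stores drive_tb * ratio."""
    return math.ceil(required_tb / (drive_tb * ratio))

print(drives_needed(100, 2, 1.0))   # 100 TB, no compression -> 50 drives
print(drives_needed(100, 2, 2.0))   # same data at 2:1       -> 25 drives

The buyer's questions (capacity and performance) stay the same; only the math behind the answer changes.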
"There’s a battle going on between CEOs and their IT organizations. The CEO is saying “hey
– I go home on the weekends, my kids are on Facebook storing pictures
and videos for free, Gmail is always on, this new Web stuff is cheap and
simple, I can get to these services from any device, Amazon is selling
compute and storage for peanuts – why am I spending so much on
IT?—Outsource the lot to the cloud!”
IT’s response? “Uh oh – we’re gonna get squeezed. We need to: Virtualize. Simplify. Do more with less. Cut the fat. Increase responsiveness.”
Technology companies, seeing the pickle their best customers are in,
the threat to their business and a way to compete, are responding with
VMware and Hyper-V integration, thin provisioning, automated tiering,
compression, data deduplication; plenty of marketing too – “to the cloud.”
And the last year or so has brought lots of high profile M&A, aimed
directly at filling portfolio gaps for areas like unstructured data and
simplifying IT (Data Domain, 3PAR, Compellent, Isilon, Storwize,
Ocarina, etc.). It kind of reminds me of the Three Stooges a little bit – “to the hunt” – lots of action, but I can’t help wondering if the big IT vendors really know where they’re going with this over the long haul.
What I mean is that business is good right now. The market’s up;
demand looks solid; everyone seems happy. But there’s a big change
coming. We’ll look back five years from now and the gains being made in
the data center will be ancient history. CEOs will be happy for a while
that CIOs are reducing costs. They’ll keep taking down IT as a
percentage of revenue. But CEOs are greedy and we all know they’ll want
more; much more. It’s why smart people like Paul Maritz say that VMware
needs to move beyond cost cutting into delivering deeper business
integration and more substantial value. "
IBM's Ed Walsh, Director of Storage Efficiency, sits down with Steve Duplessie, founder of ESG, to talk about how IBM Real-time Compression sets the bar for storage optimization in NAS. At the end of the day, if you can do compression in real time without sacrificing performance or the transparency of the implementation, then why wouldn't you, given the savings you can get over traditional compression?
We all know compression is not new, and it is coming as a standard feature in a number of storage systems. The issue is that each of these technologies has a significant impact on performance - both primary storage performance and the performance of all of the back-end operations such as backups, replication, etc.
IBM's Real-time Compression doesn't have any of these limitations - listen to Ed to hear more.