Two solid days at VMworld 2011 and I got to do and see a lot. Here is a breakdown of the top 5 things I saw at VMworld.
#1 The SiliconAngle / Wikibon Cube
You couldn’t miss it. You walked onto the show floor and there it was,
larger than life: the SiliconAngle / Wikibon Cube, broadcasting
live from VMworld 2011. Guests on the Cube included Tom
Georgens (NTAP), Pat Gelsinger (EMC), David Scott (HP), and Rick Jackson
(VMware), as well as many more. The Cube also had 12 Industry
Spotlights. The most interesting spotlight had to do with Storage
Optimization, especially for VMware.
Oh, the times they are a-changing. Now that you can deliver HD TV
live over the internet, the Cube has broadcast from a number of industry
shows and user conferences. The great part is that it is like watching a
sporting event covered by ESPN, but for tech. The Cube brings all of the
highlights of these events right to your computer screen. Now if you
can’t make an event, no problem: you can catch all the most important
messages from the Cube. The Cube is the new mechanism for delivering
content to users in the way they want to receive it, TV. For more,
check out www.siliconangle.tv
#2 Storage Optimization – Industry Spotlight
In the Storage Optimization industry spotlight, Dave Vellante and his
co-host John Furrier spent the first 15 minutes teeing up the concept.
They discussed storage optimization, where it has come from and where it
is going, especially in VMware environments. We are hearing more and
more about storage efficiency technologies. During the next 15 minutes,
Dave and I discussed the five essential storage efficiency technologies.
We also discussed the fact that IBM Real-time Compression is not only
the most efficient and effective compression technology in the industry;
we also learned that IBM acquired not just a real-time “compression”
technology but a platform that can do a number of things in real time.
In fact, all five IBM storage efficiency technologies operate in real
time, which is the most effective approach for customers.
We have been hearing a great deal about storage optimization in VMware
environments because virtualizing servers, while successful for the
server side of the house, didn’t do all it set out to do: it didn’t fix
the overall IT budget.
Virtualizing servers only pushed the financial problem to the storage
side of the house. Users have told us that when they virtualize their
servers, storage grows as much as 4x. By leveraging the right storage
optimization technologies together, users can get their budgets back
under control and deliver on the promise that server virtualization
set out to fulfill.
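The arithmetic behind those two figures is worth spelling out. The sketch below is illustrative only: the 4x growth and the "up to 75%" compression savings come from the text, while the 100 TB baseline is a made-up example.

```python
# Illustrative arithmetic: the 4x growth and "up to 75%" savings figures
# are from the article; the 100 TB baseline is a hypothetical example.

baseline_tb = 100.0                         # capacity before virtualization
post_virt_tb = baseline_tb * 4              # storage grows as much as 4x
savings = 0.75                              # best-case compression savings
after_compression_tb = post_virt_tb * (1 - savings)

print(post_virt_tb)            # 400.0
print(after_compression_tb)    # 100.0 -- back to the original baseline
```

In the best case, compression alone brings the footprint back to roughly where it was before the servers were virtualized, which is what "getting the budget back under control" means in practice.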
#3 More Free Time for “Real-life”
While on the Cube as a panelist with my good friend Marc Farley
(HPsisyphus, formerly @3ParFarley), Dave asked us what was the most
interesting thing we saw while walking around the show floor. I
didn’t hesitate in my response; there were two things in my mind.
First, it couldn’t be any more obvious how fast data is growing. Over
50% of the 19,000 people there had cameras and were taking pictures and
video. That data is going to be stored somewhere. And they had these
cameras for a reason: either we have more bloggers and tweeters than
we know about, more marketing people are going to these events, or more
people are using social media to inform and educate others. The way
users want to receive data is always changing and evolving, and at
VMworld 2011 at least, we were delivering content in a number of ways,
especially photos and video. All that data will end up in the “cloud.”
The second thing I noticed was the amount of free time VMware has
given back to the IT user. On more than one occasion I heard end
users talking about family, vacations and travel instead of the usual
banter about how challenging their jobs are and the issues they have
with their vendors, which is what I normally hear at these shows.
This was not an anomaly. I am chalking it up to the fact that VMware
makes people’s lives easier.
#4 Proximal Data
These “most interesting things” are not in any particular order. I say
this because I believe that Proximal Data is THE most interesting thing
I saw at the show. Proximal Data just came out of “stealth” in early
August. They didn’t have a booth at VMworld, but they did have a
“whisper suite.” I have to confess: since I used to be an analyst,
people sometimes ask me to come take a look at their technology and
their message to see if it is in line with what is going on in the
industry, so I got to hear the pitch.
Proximal Data’s message is right on. It hits a very important and
growing topic with VMware these days, the I/O bottleneck on virtual
servers, and they solve this problem in a very unique and intelligent
way.
First, the problem. One of the issues facing VMware today is the
number of virtual machines that can be hosted by one physical machine.
The more users can get on one system, the more efficient they can be.
The problem is, today systems are running into I/O workload bottlenecks
that are causing a limitation in the number of virtual machines one
system can run.
One way to solve this problem is to add more memory to the host, but
that can be very expensive. You can add more HBAs or NICs, but that can
also be expensive and difficult to manage. You can add more flash cache
to your storage to improve the I/O bottleneck, but doing that only
solves half the problem; you still need to solve the challenge on the
host side, again with memory or host adapters.
The solution: Proximal Data. It combines advanced I/O management
software with PCI flash cards on the host, at a very reasonable price
per host. The software combined with the card is 100% transparent to
both the virtual servers and the storage, which to me is one of the most
important features of the implementation.
Transparency is the key to any new technology. IT has a ton of
challenges and has done a great deal of work to get their environment to
where it is today. To implement a technology that causes all of that
work to be undone is very painful. Remember, the hardest thing to
change in IT is process, not technology. It’s important to preserve the
process. That is what Proximal Data does. Proximal Data can increase
the I/O capability of a VMware server with just a five-minute
installation of the PCI card and their software. This technology can
double or even triple the number of virtual machines on any physical
server, and that is a tremendous ROI. A new win for efficiency.
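The principle behind a transparent host-side cache can be sketched generically. This is not Proximal Data's actual implementation (which was not public); it is a minimal read-through LRU cache, with hypothetical names, showing why repeated reads of hot blocks stop generating I/O against the shared array.

```python
# Generic read-through cache sketch -- hypothetical names, NOT
# Proximal Data's actual design. Hot blocks are served from a local
# (flash-backed) cache; misses fall through to the shared array.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backend_read, capacity_blocks):
        self.backend_read = backend_read   # function: block_id -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()         # LRU order: oldest first

    def read(self, block_id):
        if block_id in self.cache:         # cache hit: no array I/O at all
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend_read(block_id) # cache miss: go to storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict least-recently-used block
        return data

# Toy backend standing in for the shared storage array.
array_reads = []
def backend(block_id):
    array_reads.append(block_id)
    return b"data-%d" % block_id

cache = ReadCache(backend, capacity_blocks=2)
cache.read(1); cache.read(1); cache.read(1)
print(len(array_reads))   # 1 -- repeated reads never touch the array
```

The "transparent" part is that neither the VM nor the array changes: the cache sits in the I/O path and simply answers repeat reads locally.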
There are a number of folks entering this market these days; however,
Proximal does it transparently, with no agents, making it the most
user-friendly implementation. While these guys won’t have product until
2012, I am sure it will be very successful when it hits the market.
#5 Convergence to the Cloud
Are we seeing the coming of the “God Box”? A number of vendors are
talking more and more about, and investing in, public/private cloud.
More systems are popping up that have servers, networking, high
availability and storage all in one floor tile. These systems are
designed to integrate, scale, manage VMs simply, increase productivity
and ease the management of all possible application deployments in any
business. Additionally, these boxes help you connect to the cloud to
ease the cost burden. Is the pendulum swinging back to the “open
systems” mainframe? Only time will tell.
One more for fun. The first meeting I had at VMworld was with a
potential OEM prospect of the IBM Real-time Compression IP. I have
always said that this technology could revolutionize the data storage
business much like VxVM did for Veritas many years ago. Creating a
standard way to do compression across a number of systems can help users
with implementation as well as ease the storage cost burden. I hope
this moves forward and I hope more folks step up who want to OEM the
technology.
by Steve Kenniston
The first city on my Eastern European trip was Moscow. I think the
traffic here is worse than the 101 in Silicon Valley during the dot com
era. That said, it was a great visit. I spoke at the Information
Infrastructure Conference at the Swissotel convention center in Moscow.
It was the first time I spoke to a group of people with an
interpreter. It was like being at the UN. The two main topics were
Storage Efficiency and Real-time Compression.
I spoke with a few customers and the press, and in dealing with data
growth challenges they wanted to know, “When it comes to big data,
what is next, is it ‘huge data’?” Data growth is clearly a concern.
Interestingly enough, though, most of the questions came around my title
of “Evangelist.” One reporter told me, “If an Evangelist is ‘preaching
the word of storage,’ then why not just call yourself an Apostle?” How
do you think that would look on an IBM business card: Global Storage
Apostle?
The next day I did a day of “sales enablement” in the Moscow office.
We discussed mostly how to sell and position Real-time Compression and
what is next for the technology. I was very impressed with the team.
They were very technical and knew quite a bit about Real-time
Compression and really wanted to know in more detail how the technology
was invented. This means they are really talking about the technology
and the customers are drilling down into the next level of detail.
There are a lot of good opportunities for the technology in Moscow and I
look forward to hearing more about the success of Real-time Compression.
I didn’t have a lot of time to sight see but I did make it to Red
Square. You can actually buy a beer outside in Red Square and walk
around. So I did. I took a few photos and then as the US was getting
going, I had some work calls to attend to. That evening I spent on the
34th floor of my hotel having dinner. It was a great view of Moscow. I hope to come back.
by Steve Kenniston
After landing in Warsaw, I got into a car with the local sales leader
for Poland and we drove to the event location. It was a 2 hour drive.
First, the roads and the land in Poland reminded me very much of my
home state of Maine. Very scenic and rural, but beautiful and peaceful.
We talked storage for two hours, and I am always fascinated by the
thirst for knowledge I find when I travel. It was a great ride, followed
by a customer reception and some local Polish brew.
Thursday I spent the day in Sterdyn, Poland, for IBM Storage
University. There were 30 customers at the event and it went very
well. The event was at Palac Ossolinski, which is used today as an
event center but has a very rich history; at one point it was used as a
medical facility in WWII. The photo is of the building where we had the
event. The topics we covered were:
The customers were very interactive and provided a lot of insight into
their environments. Interestingly enough, I learned during our customer
reception that IBM storage is #1 in Poland, with HP second and EMC
third. This is a true testament to the IBM sellers and the customers
who use the IBM products every day to drive their business. I also
learned that the data breakdown in Poland is 90% block, 10% file, which
I found interesting; I would be curious to check back 12 months from
now to see how it has changed.
I did learn something very interesting in Poland. The question was
asked, “Why XIV? What is so special about XIV?” The answer was
awesome. It started with two questions:
1) How old is RAID?
2) How old is your iPhone?
The reality is that data growth is outpacing what traditional RAID can
handle, and data profiles are changing as well. Combined, these have
driven new technologies like Cleversafe, Cloud Computing, Hadoop and
XIV. Just as the iPhone is a new approach to the smartphone based on
new things we know about how smartphones are being used, we now know
more about how data and storage are being used. New ways to deliver
capacity and performance are needed in order to keep up with the
changing times. I thought it was a very good answer, in terms that make
sense.
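The RAID point can be made concrete with simple, purely illustrative arithmetic: rebuild time scales with drive capacity, so as drives grow, so does the window during which a second failure can cause data loss. The 50 MB/s sustained rebuild rate and the drive sizes below are my assumptions, not figures from the talk.

```python
# Illustrative only: rebuild time grows linearly with drive capacity.
# The 50 MB/s sustained rebuild rate and drive sizes are assumptions.

REBUILD_RATE_MB_S = 50

def rebuild_hours(drive_gb):
    """Hours to rebuild one failed drive at the assumed sustained rate."""
    return drive_gb * 1024 / REBUILD_RATE_MB_S / 3600

for drive_gb in (73, 1000, 3000):   # a 2001-era drive vs. 2011-era drives
    print(drive_gb, round(rebuild_hours(drive_gb), 1))
# 73 GB -> ~0.4 h, 1 TB -> ~5.7 h, 3 TB -> ~17.1 h
```

Distributed-rebuild designs like XIV's spread that work across every drive in the system instead of serializing it onto one spare, which is why they cope better as capacities climb.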
Thursday evening I traveled back to Warsaw where I got in a bit late
and just went to a local pub, Sketch. Grabbed a small bite and some
local mead and then headed back to the hotel. I did get to see the
local Palace of Culture and Science in the middle of Warsaw, very
impressive, built as a gift from Russia to Poland.
I have an early flight to Prague. I am very excited about this part
of the journey as I have always wanted to travel to Prague. Press
meeting right when I land. Stay tuned.
by Steve Kenniston
Alright, landed safe in Prague and was picked up by one of my
colleagues and whisked away to the IBM office. There we did an
interview with Czech writer Martin Noska from Computerworld for IDG in
Czech Republic. The first thing Noska informed me of was that IBM is
number one in storage sales in the Czech Republic (just like Poland!).
He also had some very good questions, and he opened with, “What are
IBM’s biggest challenges in the storage business?” I had thought about
this for a while, and I
would have to say it is really about marketing our storage “solutions”
to the customer base. IBM is a double-edged sword. IBM is so big and
has so many products it becomes difficult to market or message all of
our products without inundating all of our customers and confusing
them. If you think about it, IBM has hundreds of thousands of customers
and business partners, if not more. This is one of our strengths.
When customers have needs or requirements we have very good input into
our product portfolio, perhaps the best in the business. Combine this
with the fact that IBM has not only storage solutions but technology
across the entire stack from servers to networking. So when it comes to
developing the right technology, that solves real customer problems, I
would argue that IBM’s portfolio is the best in the business. IBM takes
an extreme amount of care when developing a solution to ensure that it
matches the customer requirements based on the changing needs of IT.
Having an integrated portfolio that works well with our ISV partners,
VMware for example, allows us to help customers speed their time to ROI
and be very competitive in the marketplace. The challenge is, how do
we properly message our new solutions to our customers, in a timely
manner so that they are well aware of new products without giving them
too much information such that it just becomes noise? It is difficult
to say the least.
The interview went very well. There were questions about tape, where
we discussed the advantages of IBM’s LTFS technology for more advanced
tape usage, and we discussed the direction data deduplication will go as
well. Noska’s view was that there hadn’t been any advancement in data
deduplication in the last 5 years. I told him that for secondary
storage, backup, he is right; I also told him that the real advancement
in deduplication will come when it is ready for primary storage. Today
deduplication isn’t ready for primary, but it will be.
On Monday the 13th we traveled to visit Avnet. They are a
great IBM partner. Like most partners they have a very large SMB
install base, and in line with a lot of the SMB feedback I have been
getting, they are looking for a building-block solution that has all of
the software features implemented as part of the stack. SMB and
enterprise alike are starting to realize that the value in any array is
becoming the software stack that makes the hardware efficient,
optimized, flexible, and dynamic. IT’s job continues to get more and
more challenging, with developing strategic initiatives to make the
business more competitive, and it is the job of the vendor to make
sure these solutions are as optimized and cost effective as possible.
We also visited DHL. These guys have one of the greatest datacenters
I have ever visited. They are very advanced and push a lot of data.
They do some very strategic logistics for a number of companies in
Europe and Asia. They, like many others, have a number of challenges.
Echoing my blog post about “The 5 Most Interesting Things at VMworld”
(#4), I heard something very interesting today. I asked, “What is your
most challenging storage issue?” He told me that storage was not his
“most difficult” challenge. Storage efficiency was important to him in
order to keep driving down costs for his organization as they deliver a
service to the different groups that make up DHL, but his most difficult
challenge was with server I/O in his VMware environment. If you read
#4 in my post, regarding Proximal Data, this is exactly the issue they
address. As VM instances grow on the physical servers, the I/O starts
to become the big problem. DHL runs over 4000 instances of VMware and
as the business demands more applications and application resources,
they are bound by the I/O of the server, which also causes them to WAY
over provision their storage for performance reasons. This is very time
consuming, management intensive and expensive. The combination of a
solution like Proximal Data as well as compression can help them
optimize their infrastructure to save money and deliver better, more
cost effective services to their lines of business.
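The over-provisioning DHL describes follows from simple sizing arithmetic. All numbers below are hypothetical illustrations, not DHL's: when spinning disks are bought for IOPS rather than capacity, you end up paying for far more raw space than the data needs.

```python
# Hypothetical sizing exercise: disks bought for IOPS, not capacity.
import math

required_iops = 40_000   # aggregate demand from the VM farm (assumed)
iops_per_disk = 180      # rule-of-thumb for one 15K RPM spindle
disk_tb = 0.6            # 600 GB per spindle
data_tb = 50             # what the data actually requires (assumed)

disks_for_iops = math.ceil(required_iops / iops_per_disk)
raw_tb = disks_for_iops * disk_tb

print(disks_for_iops)    # 223 spindles just to hit the IOPS target
print(round(raw_tb, 1))  # 133.8 TB raw bought for only 50 TB of data
```

A host-side cache that absorbs the hot reads lets capacity, rather than IOPS, drive the purchase again, which is where the Proximal-plus-compression combination pays off.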
On the lighter side, I spent the weekend in Prague. What an amazing
city. The weather was fantastic and I was able to take a lot of great
photos. I walked around Prague Castle, ate some authentic Czech food,
visited the memorial for the Czech hockey players who died in the
Russian plane crash, and met some pretty interesting people. You can
check out some of my photos of Prague at www.facebook.com/skenniston.
Coincidentally, the photo above shows the “Golden Lane,” where the
alchemists worked to turn anything they could find into gold.
After a full first day at VMworld, I started to think more about IBM and their technology solutions that help customers in a VMware environment. Here is a top ten list of things to consider when looking at a VMware implementation and how IBM can help.
#1 Integration
VMware is playing Switzerland and ensuring all vendors are on a level playing field, so when other vendors state that they have “better” or “closer” technology integration than other vendors, it’s probably not true. Some vendors may not choose to integrate with certain things, but rest assured, all of VMware’s APIs are open to all vendors. Take a look and see how IBM is providing plug-ins for vSphere, SRM, and VAAI in XIV as well as other storage platforms.
#2 Ease of Use
IBM has seen, firsthand, a number of our customers switch from one platform to XIV because of their pleasure in the simplicity of the XIV solution. A large manufacturer is one example of a customer who is provisioning new VMware instances in less than five minutes.
One XIV customer, a very experienced storage administrator, saw the XIV GUI and said, “I don’t get it (the XIV GUI). It can’t be that easy. Either I’m missing something or they are not showing me everything.” The reality is, it is that easy, and that interface is prolific throughout the IBM storage portfolio, including the Storwize V7000 and SVC.
#3 Storage Efficiency
Probably one of the most important topics this year is Storage Efficiency, and IBM is a leader in this department. The Storwize V7000 utilizing compression, or N-Series with the Real-time Compression appliance, can reduce the VMware storage footprint by up to 75%. Users tell us that by implementing VMware, their storage footprint has grown by as much as 4x; therefore their overall IT budgets didn’t get better, the dollars just shifted from servers to storage. IBM’s Real-time Compression users can save up to 75% without any performance impact. Additionally, Real-time Compression is the only compression technology that works in conjunction with deduplication, compressing the data before it is deduplicated, giving an added benefit to the technology.
Now users have the opportunity to get their overall IT budget back under control.
#4 Data Protection
The reality here is that many enterprises are waiting for the war to be fought out between the vendors in this space, or looking to embedded snapshots and disk based technologies with mirroring to cut out all of the host based challenges with data protection.
A report by Taneja Group, sponsored by multiple clients, suggests that the biggest issue in virtual environments is data protection, as many enterprises do not know what they need to do and are looking to their current vendors to provide solutions. So work closely with the IBM team and leverage all of the work that IBM has done with Tivoli and VMware to help solve your data protection challenges.
A lot of folks like to talk about deduplication when it comes to VMware; just make sure it is implemented properly and at the right place. ProtecTIER has deduplication ratios as good as any, along with great performance.
#5 Flexibility
I am not sure how you get more flexible than IBM. From hardware to software to services to partners, IBM offers solutions across a wide spectrum, from hardware that can meet a range of performance requirements and application types, to software that can help users analyze their data more effectively. IBM can also deliver all of these solutions through our relationships with our ISVs as well as partners, offering superior flexibility.
#6 High Availability
When it comes to high availability in storage, it is hard to beat the new V7000 or the XIV product. Innovatively designed specifically for high availability, a virtualized storage platform such as XIV gives users real-world availability and reliability that does not sacrifice performance in any of their applications.
#7 Scalability
With IBM XIV, you can simply scale as you need to, automatically taking advantage of new capacity and linear performance improvements, while managing the entire enterprise from a single, easy-to-use GUI.
Also, with Real-time Compression, you now have the added benefit of putting more capacity in your existing footprint to do even more analytics while saving on footprint, power and cooling – all in real-time.
#8 Services / Solutions
IBM is the worldwide leader in providing services. IBM is the largest OEM of VMware solutions on the planet and provides support and services in 170 countries around the globe. IBM’s Global Services team has architected and installed hundreds, if not thousands, of VMware implementations, helping customers go from a non-virtualized to a virtualized world. IBM, as well as its partners, can help migrate customers’ virtualized environments without a long outage, maintaining application availability and customer production, and move them to thin provisioning and a truly virtualized platform, not Vblocks and a coalition.
#9 TCO / ROI
IBM offers great solutions that reduce the risk, cost, and complexity of the virtualized world. IBM focuses on real-world customer challenges. Customers have been hit hard these last few years when it comes to the budgets to manage their IT environments. We keep helping our customers do more with less by enabling a more efficient storage platform than any other vendor. IBM XIV, V7000, N-Series, SVC and ProtecTIER solutions are a great fit for solving difficult VMware challenges, and we have real-world references to prove it.
#10 100 Years of Innovation
The bottom line: there is always more to do. IT changes at a rapid pace and it is the vendor’s job to keep up with the needs of its customers. IBM has been doing this for 100 years and we will continue to do so.
Brocade Unlocks the Power of the Cloud Through Open, Multi-Vendor Virtual Compute Blocks
Brocade and Its Partners Help Customers Build the Next Generation of Distributed and Virtualized Data Centers in a Simple, Evolutionary Way
LAS VEGAS, NV -- (MARKET WIRE) -- 08/30/11 -- (VMworld 2011) -- Today at VMworld, Brocade (NASDAQ: BRCD), the leader in fabric-based data center architectures, announced significant advancements to the Brocade® CloudPlex™ architecture with new Brocade Virtual Compute Blocks. These bundled solutions consist of integrated, tested and validated multi-vendor server, virtualization, networking and storage resources. Demonstrating substantial partner traction, the new solutions are available today, delivered and supported in collaboration with a wide range of alliance partners, including Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
This open approach is an underlying tenet of the Brocade CloudPlex architecture, which was announced in May 2011. The open, extensible framework is designed to help customers build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. It is the foundation for integrated compute blocks and it supports existing multi-vendor infrastructure to unify customers' assets into a single compute and storage domain.
"Organizations are seeking to maximize the benefits of cloud computing through more efficient infrastructure procurement, pre-integrated components, faster support response, and greater choice in best-in-class products to meet specific business needs," said John McHugh, CMO of Brocade. "Brocade Virtual Compute Blocks leverage our Ethernet fabrics and industry-leading Fibre Channel SAN fabrics to allow our partners to create integrated stacks that optimize cost effectiveness, flexibility and performance. Because these solutions are open, they allow our customers to scale components independently and better utilize legacy infrastructures."
According to IDC research, "As organizations move to create a dynamic data center enabled by virtualization, they are moving to architectures where server, storage, and network assets are in tighter alignment into converged infrastructures. IDC defines a converged infrastructure as one in which the server, storage, and network infrastructure resources are treated as pools to be assigned as needed to business services... The top benefits organizations achieve by implementing a converged infrastructure are cost savings, simplified management, better availability, increased flexibility, and higher utilization."(1)
Brocade Virtual Compute Block Partner Solutions Brocade Virtual Compute Block solutions include hypervisor software integrated with servers, storage and Brocade fabric networking products in bundled, pre-racked and pre-tested configurations enriched by technology from Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
Dell Brocade and Dell have partnered to develop a reference architecture that includes Dell Compellent Fibre Channel storage, Dell PowerEdge servers, Brocade data center and SAN switches and the VMware hypervisor, which is being shown at the Brocade VMworld booth.
"Our reference architecture developed with Brocade demonstrates Dell Compellent's commitment to provide open, cloud-optimized solutions for our customers' increasingly dynamic requirements in Fibre Channel environments," said Phil Soran, president of Dell Compellent. "Enterprises that deploy this reference architecture benefit from the ability to scale virtualization with their business requirements while deploying industry-leading storage from Dell Compellent and Fibre Channel networking solutions from Brocade."
EMC EMC and Brocade have joined forces with several partners to deliver Virtual Compute Blocks, which combine VMware virtualization software and management tools, EMC® VNXe™ unified storage, servers and integrated Brocade Fibre Channel and Ethernet fabric networking technologies. EMC and Brocade are now working with Arrow, Tech Data, First Distribution and Acao to deliver Virtual Compute Blocks in the U.S. and in parts of Europe, Africa, and South America. These integrated, easy-to-install solutions enable EMC customers to quickly deploy private and hybrid cloud infrastructures, which provide data center consolidation, availability, scalability and automation.
"Our integration work with Brocade is a key enabler for our resellers in providing simplified deployment of Virtual Compute Blocks and further demonstrates our commitment to delivering cloud infrastructure solutions for our mutual customers that help transform data centers into highly efficient and agile environments," said Josh Kahn, vice president of Solutions Marketing at EMC.
Fujitsu Fujitsu and Brocade have partnered to create solutions supporting Fujitsu's Dynamic Infrastructures architecture, which will help enterprises boost business agility, efficiency and IT economics. These are designed for data centers of the future, delivering powerful automated pools of computing resources made up of server, storage, network and virtualization technology.
"Fabric-based networks are an important requirement to successful deployments of solutions that will enable our customers to accelerate their cloud-based IT initiatives," said Jens-Peter Seick, senior vice president of the Product Development Group at Fujitsu Technology Solutions. "We are pleased to add Brocade Ethernet fabric technologies to our portfolio, which enhances the long-term partnership we have had in deploying SANs for our customers' virtualized environments."
Hitachi Data Systems Hitachi converged data center solutions combine storage, compute and networking with software management, automation and optimization to automate, accelerate and simplify cloud adoption. As a key networking partner, Brocade provides networking solutions for Hitachi converged data center solutions, including Ethernet switches, Fibre Channel fabric data center switches, and Fibre Channel switch modules for the Hitachi Compute Blade family. Solutions include:
Hitachi solutions built on Microsoft Hyper-V Cloud Fast Track: A combination of Hitachi storage and compute, with Brocade networking and Microsoft Windows Server 2008 R2 with Hyper-V and System Center for high-performance private cloud infrastructures and an avenue for further automation and orchestration.
Hitachi Unified Compute Platform: An open and converged platform that provides orchestration and management within the portfolio of Hitachi converged solutions for automated dynamic management of servers, storage and networking to create business resource pools from a simple, yet comprehensive interface.
Hitachi Converged Platform for Microsoft Exchange 2010: The first in a portfolio of pre-tested application-specific converged solutions, engineered for rapid deployment and tightly integrated with Exchange 2010's powerful new features for resilience, predictable performance and seamless scalability.
"HDS and Brocade have partnered to deliver tested and proven solutions with tightly integrated storage, compute and networking products that allow our mutual customers to benefit from Ethernet switch and Fibre Channel fabric technologies to create flexible cloud-based infrastructures," said Asim Zaheer, vice president of Corporate and Product Marketing at Hitachi Data Systems. "Through quicker deployment, automation and scalability, Hitachi converged data center solutions help organizations adopt cloud at their own pace and see predictable results and faster time to value."
VMware VMware and Brocade have developed a reference architecture solution that enables organizations to create a scalable virtual desktop infrastructure (VDI) environment.
The VMware/Brocade VDI reference architecture, VMware View™, combines Brocade VDX data center switches and converged network adapters, Intel x86-based rack servers, iSCSI-based storage and Trend Micro security software.
Benefits of the VMware/Brocade VDI solution include best-in-class performance and scalability, enhanced security, ease-of-migration and lower total cost of ownership.
"VMware and Brocade have collaborated on a joint VDI solution that addresses our customers' needs to improve business productivity though increased performance, secured client access and elimination of business disruptions," said Vittorio Viarengo, vice president of End-User Computing at VMware. "IT organizations can utilize our reference architecture to deploy a quick-start configuration within their data center or at remote locations. In addition, it can be used as a test or development platform for businesses eager to gain the benefits and advantages of virtualizing user desktops."
Avnet Virtual Compute Block Solutions: Separately today at VMworld, Brocade and Avnet announced the joint development of marketing and enablement support for a new set of multi-vendor, pre-tested and configured virtualization solutions. The first of these is a reference architecture and validated solution designed to cost-effectively scale virtual desktop infrastructure (VDI) environments to support thousands of clients (or desktops) per solution bundle. The VDI bundle will help Avnet reseller partners design and deploy open, efficient and scalable virtualization solutions for their end customers by incorporating Brocade and VMware networking and hypervisor technologies in conjunction with a variety of compute and storage platforms.
About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
VMware, VMware View and VMworld are registered trademarks and/or trademarks of VMware, Inc. in the United States and/or other jurisdictions. The use of the word "partner" or "partnership" does not imply a legal partnership relationship between VMware and any other company.
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 --
Brocade (NASDAQ: BRCD) today announced that FleetCor,
a leading independent global provider of specialized payment products
and services to businesses, commercial fleets, major oil companies,
petroleum marketers and government fleets, has selected Brocade as the
vendor to build its cloud-optimized
network. This new network enhances FleetCor's ability to securely
process millions of transactions monthly and ultimately better serve its
commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor
cardholders worldwide, and they are used to purchase billions of gallons
of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help
evolve its data center and IT operations into a more agile private cloud
infrastructure. Brocade® cloud-optimized networks
are designed to reduce network complexity while increasing performance
and reliability. Brocade solutions for private cloud networking are
purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we
looked at market leadership and non-stop access to critical data," said
Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade
cloud-optimized networking solutions are perfect for our data centers
because they allow us to optimize applications faster, virtually
eliminate downtime and help us meet service level agreements for our
customers. Moving to a cloud-based model also provides us the
flexibility to make adjustments on the fly and access secure information
virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router for each of its three data
centers, citing scalability as a major driver for the purchase. This
approach enables FleetCor to virtualize its geographically distributed
data centers and leverage the equipment it already has, at the highest
level, to achieve maximum return on investment. The Brocade MLXe provides additional benefits for FleetCor by using less power and occupying a smaller footprint than competitive routers; this is critical in power- and space-constrained locations in order to allow for growth. The Brocade MLXe also enables continuous business operation for FleetCor through Multi-Chassis Trunking, massive scalability supporting the industry's highest 100 GbE density with no performance degradation for advanced features like IPv6, and flexible chassis options.
The Brocade ServerIron ADX
Series of high-performance application delivery switches provides
FleetCor with a broad range of application optimization functions to
help ensure the reliable delivery of critical applications.
Purpose-built for large-scale, low-latency environments, these switches
accelerate application performance, load-balance high volumes of data
and improve application availability while making the most efficient use
of the company's existing infrastructure. The series also delivers dynamic application provisioning and de-provisioning for FleetCor's highly virtualized data center and enables seamless migration and translation to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers,
FleetCor has eliminated thousands of costly networking cables, saving
it hundreds of thousands of dollars and allowing the company to segment,
streamline and secure its network. FleetCor has also been able to
easily integrate Brocade network technology with third-party offerings
already installed in the network, for complete investment protection.
FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for
its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in
each of our data centers to help us leverage the benefits of cloud
computing and the Brocade MLXe delivered on all fronts," said Keirbeck.
"By virtualizing our data center, Brocade allows for non-stop access to
the mission-critical data that FleetCor and its customers rely on every
day. We chose the Brocade MLXe because of the tremendous results we
already saw from our existing Brocade solutions and the exceptional
support and service."
According to a report from analyst firm Gartner, "Although 'economic
affordability' is an immediate, attractive benefit, the biggest
advantages (of cloud services) result from characteristics such as
built-in elasticity and scalability, reduced barriers to entry,
flexibility in service provisioning and agility in contracting."(1)
by Steve Kenniston
History truly does repeat itself. We are talking about the history of data storage. Every once in a while a new technology comes along that requires a new way to think about infrastructure. Notice I said “infrastructure”. I’d like to draw two analogies:
1: RAID – Prior to RAID, users stored their data on disk and, if they could afford it, backed that data up to have a protected copy. When RAID came out, users were able to store their data on multiple disks appearing as one device. The benefits were increased data reliability and better performance. This new technology, however, fundamentally changed how disk was sold, even though the questions stayed simple:
How much capacity do you need?
What type of performance does your application require?
The sales rep's point of view changed. There were a number of new considerations that needed to be taken into account. First, the age-old question: “Will I sell less storage ‘stuff’?” Remember, the person at
the time, selling the disk was probably also selling the backup tape and
software to protect that information. If the disks are more reliable,
maybe the customer won’t need as much tape? Second, when the capacity
question came up, the seller also needed to know what type of RAID the
customer wanted to ensure they sold them enough drives. It was no
longer as simple as asking the capacity requirements and dividing it by
the drive capacity at the time. Now, depending upon the RAID level, there was a new set of math that needed to be done. Third was the notion of performance: more spindles meant more performance, so once the capacity equation was solved, you also needed to know the I/O requirements to make sure the right number of drives were sold to satisfy both the capacity and the performance.
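The RAID-era sizing math described above can be sketched as follows. This is a hypothetical illustration: the usable-capacity fractions (an 8-drive RAID 5 set, 2-way mirroring for RAID 10) and the per-drive figures in the example are assumptions for the sake of the arithmetic, not vendor data.

```python
import math

def drives_needed(usable_tb, iops_required, drive_tb, drive_iops, raid_level):
    # Fraction of raw capacity left usable after RAID overhead
    # (illustrative: 8-drive RAID 5 set, 2-way mirroring for RAID 10).
    usable_fraction = {"RAID0": 1.0, "RAID5": 7 / 8, "RAID10": 0.5}[raid_level]
    for_capacity = math.ceil(usable_tb / (drive_tb * usable_fraction))
    for_performance = math.ceil(iops_required / drive_iops)
    # The order must satisfy whichever requirement demands more spindles.
    return max(for_capacity, for_performance)

# 10 TB usable and 2,000 IOPS on 1 TB / 150 IOPS drives:
print(drives_needed(10, 2000, 1, 150, "RAID5"))   # 14: I/O-bound (capacity alone needs only 12)
print(drives_needed(10, 2000, 1, 150, "RAID10"))  # 20: mirroring doubles the capacity count
```

Note how the same two customer questions (capacity, performance) now produce different drive counts depending on the RAID level, which is exactly the "new set of math" the seller had to learn.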
In the end, we figured it out and the industry never looked back. RAID is a de facto standard in all storage subsystems today; I even run RAID in my home. The business benefits of having RAID far outweighed the costs. In fact, it is probably one of the first times in storage history that the question, “How can you afford not to have it?”, came up.
2: Virtual Machines – When VMware came out, the value proposition was: do more work with less physical infrastructure. And again, the business benefits far outweighed the technology hurdle of implementing the new solution. Keeping in mind that it is much harder to change process in IT than it is to change technology, IT decided that this new way of serving up processing power to applications was well worth all of the process changes it would require. One example: backup would need to change when implementing virtual server technology. The data would grow 4x, and the processing of that information for backup would take longer, in a world where time was all too valuable. However, the business benefit justified the change.
Again, the seller's questions were consistent:
How many virtual servers do you need? (Capacity)
What type of performance do you need for each virtual server?
The answers to these questions allowed a sales rep to configure the right number of physical systems to run the required virtual systems and make the line of business successful. Additionally, some of the same considerations came up. “Will I sell fewer servers and make less money?”
Now that there was new server technology (more processors, the ability to handle more memory), systems could be bigger, and more expensive. Sellers also needed to know a bit more about “capacity”: how many virtual systems could a physical system run successfully? They also needed to have an understanding of performance. Now sellers were configuring systems to run the equivalent of 20 to 100 servers on one physical machine.
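The consolidation math behind those two questions can be sketched briefly. This is an illustrative assumption, not anything from the original post: the number of physical hosts is driven by how many virtual servers each host can hold, bounded by its tightest resource.

```python
import math

def hosts_needed(num_vms, max_vms_by_cpu, max_vms_by_memory):
    # A host can only run as many VMs as its most constrained
    # resource allows (here, just CPU vs. memory headroom).
    vms_per_host = min(max_vms_by_cpu, max_vms_by_memory)
    return math.ceil(num_vms / vms_per_host)

# 120 virtual servers, where CPU would allow 40 per host but memory only 30:
print(hosts_needed(120, 40, 30))  # 4 hosts
```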
Today I would suggest that we are at a crossroads in history. New technology has come along that will have a significant impact
on the storage world. First, research from IBM reflects the fact that
disk drives can no longer keep getting two times as dense for half the
cost as they had been throughout the late ’90s and early 2000s. The technology doesn’t exist today to make the drives spin faster, stay cool and not lose data. Until now. Real-time compression is a game-changing technology that will add significant value to the storage industry without having to change the way IT thinks about the deployment of their storage.
Data is growing at such a significant pace today, and with the latest IBM
research about disk capacities, something needs to change. Data centers
are just running out of space and more customers want to keep more data
on line for reasons such as competitive edge or compliance, but no
matter the reason, they want access to their information. Enter
real-time compression. Now, there is a fundamental difference between real-time compression and other compression technologies and implementations, but I am not going to get into that here. It is safe to say that post-process and in-line compression are very different from real-time compression, and users can’t get the benefits of improved primary storage capacity, transparently, with no performance impact, with anything but real-time compression technology.
Adopting real-time compression, like other game-changing technology, doesn’t require any new questions; there is simply a new set of math behind the same two:
How much capacity is required?
What is the performance requirement?
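A minimal sketch of that new math: sizing is done against effective (post-compression) capacity instead of raw capacity. The 60% compression ratio below is an illustrative assumption, not a guaranteed figure.

```python
import math

def physical_tb_required(logical_tb, compression_pct):
    # At 60% compression, each logical TB lands as 0.4 TB on disk.
    return logical_tb * (1 - compression_pct / 100)

def drives_needed(logical_tb, compression_pct, drive_tb):
    return math.ceil(physical_tb_required(logical_tb, compression_pct) / drive_tb)

# 100 TB of data at 60% compression on 2 TB drives:
print(physical_tb_required(100, 60))  # 40.0 physical TB
print(drives_needed(100, 60, 2))      # 20 drives instead of 50
```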
In time, real-time compression will be as ubiquitous as RAID, and just like users don’t think much about RAID, users won’t need to think about compression. Compression will become an expected feature of the array. It doesn’t matter that it now takes fewer drives to satisfy the original questions around capacity and performance. With data growing as fast as it is, and with disks not being able to keep up their growth pace, something needs to change, and that something is real-time compression. Soon, it won’t matter what the physical capacity of a disk drive is; it will be about a disk’s virtual capacity: what it is capable of storing. It is time we all started thinking this way.
“Storage Efficiency” has become a big topic over the past 12 months. There are a number of new technologies that have come out in the last few years that are helping to deal with storage growth. We all know that data is the root of the decisions that drive business today. The more data you have, hopefully, the better decisions you can make to drive your business to success. The question is, “what is the value (and hence the cost) of the infrastructure to create that success?” What we do know is that the ability to put more data in a highly efficient footprint can give your company a competitive edge. There are five technologies that can help an IT organization create an efficient storage infrastructure. These are:
1) Tiering
2) Virtualization
3) Thin Provisioning
4) Data Deduplication
5) Real-time Compression
It is also important to point out that there are some semantics when talking about storage efficiency, specifically between efficiency and optimization technologies. I think it is useful to attempt to define these as they lead us to picking the right solutions for what we are trying to accomplish. For the purpose of this post, efficiency will relate to making existing capacity more useful and optimization will mean making more capacity out of existing capacity.
Using these definitions, technologies such as Tiering, Virtualization and Thin Provisioning are efficiency technologies. These technologies help to utilize the existing capacity that you have.
Tiering is technology that is used on about 10% of your data or less. It is used to move data that requires higher performance to flash storage. Good tiering technology analyzes data access patterns and moves the most active data to the highest performing disk. It doesn’t really change the amount of physical capacity that is required; it just changes what type of capacity is required and allows IT to make sure data is operating as fast and efficiently as possible.
Virtualization technology allows IT to make sure disk utilization is used as efficiently as possible. Until recently storage utilization rates were around 50%. By leveraging virtualization technology, IT can group pools of storage so they don’t need to purchase capacity needlessly. Virtualization can be used on 50% to 60% of your storage but it doesn’t change your physical capacity infrastructure requirements and at most allows users to take advantage of 20% to 40% of their capacity that they once didn’t access.
Similar to virtualization technology, thin provisioning can also be used on 50% to 60% of your capacity; however, thin provisioning gives IT about 10% to 40% of their capacity back. Thin provisioning helps IT manage their existing capacity and utilization by making capacity available to users much more easily. Again, however, it doesn’t change the amount of physical storage infrastructure required.
Optimization technologies help IT to better manage their physical storage footprint. Optimization technologies optimize existing infrastructure by allowing users to put more capacity in the same physical space. The two technologies that are currently used today are data deduplication and real-time compression.
Optimization technologies are a bit tricky. There is a balance required between optimization, performance and availability. At the end of the day, IT chooses the storage it buys with two very important characteristics in mind: performance and availability. Optimization technologies cannot affect these characteristics. It is for this reason that data deduplication really isn’t ready for “prime time” on primary, active storage. Data deduplication creates too much of a performance impact on primary, active data. Today, data deduplication could be used on about 10% to 15% of the primary, less active capacity in the data center, and only provides about 30% to 50% optimization on that data. In other words, deduplication technology can reduce the physical infrastructure required by as much as 10%, meaning IT may not need to buy as much physical capacity.
Real-time compression, on the other hand, has one of the most dramatic effects on primary storage capacity. Real-time compression can be used on as much as 85% of the storage footprint and can compress data between 50% and 80%. That said, real-time compression could let IT purchase as much as 70% less overall storage capacity. Real-time compression also does not affect the main characteristics for which users buy storage (performance and availability). IT could have as much as 70% less footprint but keep the same amount of data or more online. Additionally, IT can now purchase storage opportunistically without such a dramatic impact on their infrastructure, process or budgets. This allows companies to keep more capacity online and available, helping them run more analytics on more data and become more competitive.
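The coverage-times-reduction arithmetic in the two paragraphs above can be worked as a back-of-the-envelope calculation. The percentages are the post's own figures; the helper itself is an illustrative assumption, not a sizing tool (real savings depend on the actual data).

```python
def overall_savings_pct(coverage_pct, reduction_pct):
    # Share of total footprint eliminated = share of data the technology
    # applies to, times how much it shrinks that data.
    return round(coverage_pct / 100 * reduction_pct / 100 * 100, 1)

print(overall_savings_pct(15, 50))  # 7.5  (dedup best case, roughly the "as much as 10%")
print(overall_savings_pct(85, 80))  # 68.0 (real-time compression best case, roughly 70%)
```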
When deciding which storage efficiency technology will have the most effective impact on your overall environment and budget, start with optimization technologies to get data growth under control. Adding value to the line of business that can drive revenue with more data will make you a hero and your business more successful.
“Procedures for replacing or adding nodes to an existing cluster”
Scope and Objectives
The scope of this document is twofold. The first section provides a procedure for replacing existing nodes in an SVC cluster non-disruptively. For example, the current cluster consists of two 2145-8F4 nodes and the desire is to replace them with two 2145-CF8 nodes, maintaining the cluster size at two nodes. The second section provides a procedure to add nodes to an existing cluster to expand the cluster to support additional workload. For example, the current cluster consists of two 2145-8G4 nodes and the desire is to grow it to a four-node cluster by adding two 2145-CF8 nodes. The objective of this document is to provide greater detail on the steps required to perform the above procedures than is currently available in the SVC Software Installation and Configuration Guide, SC23-6628, located at www.ibm.com/storage/support/2145. In addition, it provides important information to help the person performing the procedures avoid problems while following the various steps.
Section 1: Procedure to replace existing SVC nodes non-disruptively
You can replace SAN Volume Controller 2145-8F2, 2145-8F4, 2145-8G4, and 2145-8A4 nodes with SAN Volume Controller 2145-CF8 nodes in an existing, active cluster without taking an outage on the SVC or on your host applications. In fact, you can use this procedure to replace any model node with a different model node as long as the SVC software level supports that particular node model type. For example, you might want to replace a 2145-8F2 node in a test environment with a 2145-8G4 node previously in production that just got replaced by a new 2145-CF8 node.
Note: If you are attempting to replace existing 2145-4F2 nodes with new 2145-CF8 nodes, do not use this procedure; you must use the procedure specifically for this sort of upgrade, located at the following URL: ftp://ftp.software.ibm.com/storage/san/sanvc/V5.1.0/pubs/multi/4F2MigrationVer1.pdf
This procedure does not require changes to your SAN environment because the new node being installed uses the same worldwide node name (WWNN) as the node you are replacing. Since SVC uses the WWNN to generate the unique worldwide port names (WWPNs), no SAN zoning or disk controller LUN masking changes are required.
United States Army Advances Ethernet Infrastructure to Optimize Applications and Deliver Mission-Critical Military Information
Brocade Improves Business Continuity With Non-Stop Networking and Maximum Performance
SAN JOSE, CA -- (MARKET WIRE) -- 06/01/11 --
Brocade (NASDAQ: BRCD) today announced it is working with the United States Army as part of the Installation Information Infrastructure Modernization Program (I3MP) at Fort Carson
to create a highly resilient network to support advanced voice, video
and critical military applications in an effort to modernize the base's
core enterprise information infrastructure. This installation represents
one of the largest core-to-edge deployments of 100 Gigabit Ethernet (GbE)-ready routers and 10 GbE aggregation and LAN switches.
Fort Carson, winner of the Network Enterprise Center (NEC) of the Year award, is a United States Army installation located in Colorado. Its 137,000-acre facility is home to critical members of the military, including the 4th Infantry Division, the 10th Special Forces Group, the 71st Ordnance Group (EOD), the 4th Engineer Battalion, the 759th Military Police Battalion, the 10th Combat Support Hospital, the 43rd Sustainment Brigade and the 13th Air Support Operations
Squadron of the United States Air Force. Due to the
sheer number of users requiring more bandwidth to support emerging forms
of external and inter-base network communication, Fort Carson
required an infrastructure refresh that would provide scalability for
growth while simplifying the delivery of latency-sensitive voice, video and data.
This mission-critical imperative was successfully solved by deploying 100 GbE-ready Brocade® NetIron® XMR
Multiprotocol Label Switching (MPLS) IPv6-ready core routers as the
backbone of the network. The MPLS capabilities provide superior
efficiency, Quality of Service (QoS) and reduced latency times for
critical online applications and services. As a result, Fort Carson's
personnel can minimize network bottlenecks by prioritizing their
delay-sensitive traffic over a path with minimal hops and lower
congestion -- helping boost overall productivity and expedite response
to urgent situations.
In the federal government, network manageability is a top priority for
IT managers. A challenge has been deploying scalable solutions that are
cost-effective and do not degrade or impair network performance. Through
the use of Brocade IronView® Network Manager,
customers can leverage the power of sFlow scalability and wire-speed
operation to deliver a network-wide solution for detecting and
monitoring network traffic without impacting application performance.
This is a significant advantage over alternative network management
solutions that are limited in their scope and that can impact
performance when implemented as inline appliances.
The entire Brocade network solution meets the stringent Defense Information Systems Agency
(DISA) Joint Interoperability and Test Center (JITC) requirements. DISA
JITC's mission is to support the war-fighter with direct technical
assistance and to conduct performance and interoperability testing and
certification for net-centric strategic voice, video and data networking
systems integral to the Department of Defense (DoD) Global Information Grid.
"The selection of Brocade by the United States Army's I3MP
program is a significant win for Brocade, highlighting our proven
expertise in providing high-performance, non-stop networking solutions
to government organizations worldwide," said John McHugh,
chief marketing officer, Brocade. "By meeting the I3MP network and
service requirements, Brocade is well-positioned to further extend its
market presence within the government sector as a leading networking
provider to support and optimize mission-critical applications."
Businesses continue to search for storage solutions that save money
without sacrificing performance. Last year, IBM introduced Scale Out
Network Attached Storage (SONAS), the industry’s first such
network-attached storage (NAS) offering to address this business need.
SONAS is an enterprise-class NAS system that provides extreme
scalability, availability and security—and does so with record-breaking
performance. It’s designed as a single global repository to manage
multiple petabytes of storage and billions of files, all under one file system.
In April, IBM announced significant performance enhancements to SONAS: improved information lifecycle management (ILM) and hierarchical storage management (HSM), as well as ease of deployment and antivirus integration.
Todd Neville, SONAS program leader at IBM, says SONAS is unique in
that it can very near-linearly scale to almost any performance level.
With SONAS, he says, “You can build a system that’s as fast as you want
it to be; but it’s not just about absolute size, it’s also about bang
for your buck. We’ve significantly increased the software performance in
our upcoming release 1.2, so customers see a significant performance
increase on their current platform with no additional costs.”
Funda Eceral, SONAS market segment manager at IBM, says SONAS is the
only true scale-out NAS system available in the marketplace. “While you
can nondisruptively add capacity with storage building blocks,” Eceral
says, “you can also still continue to independently scale out your I/O
performance with interface nodes. It brings operational efficiency and
extraordinary utilization rates for each customer.”
Three Key Features
This version of SONAS offers three key features, according to Neville:
Ease of deployment. Using Network Data Management Protocol
(NDMP), a SONAS device can be easily integrated into existing
data-center backup infrastructures. “If you have an enterprise backup
deployment using NDMP, you will be able to take SONAS and quickly
connect with a wide variety of popular backup systems,” Neville says.
Built-in antivirus integration. Scalable NAS storage devices
must have a way for an antivirus function to perform scans on files
intelligently, such as when they’re opened or closed. SONAS includes a
built-in functionality that lets a third party like Symantec integrate
into the SONAS device to perform antivirus operations, as simple “full
file-system scans” become cumbersome at enterprise scales.
Physical size. Neville says customers asked IBM to make the
SONAS device more compact, although it supports almost a full petabyte
in a single rack, making it the only offering in IBM’s NAS portfolio
that can do so. It’s now 10 inches shorter than the original device, can
scale up to 14.4 petabytes (with 2 TB drives) and has a single point of
management, which can significantly reduce storage-administration costs.
“Everyone says, ‘We do tiering, HSM and ILM,’ but design matters—IBM does it differently.” —Todd Neville, SONAS program leader, IBM